Test Report: QEMU_macOS 18804

3f87824b0e7c024b0b0e0095d3da0d45809b8090:2024-05-07:34370

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.8
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.95
27 TestAddons/Setup 10.46
28 TestCertOptions 10.1
29 TestCertExpiration 195.14
30 TestDockerFlags 10.11
31 TestForceSystemdFlag 10.3
32 TestForceSystemdEnv 9.87
38 TestErrorSpam/setup 9.75
47 TestFunctional/serial/StartWithProxy 9.88
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.65
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.94
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.28
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 87.06
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.44
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 34.19
141 TestMultiControlPlane/serial/StartCluster 9.77
142 TestMultiControlPlane/serial/DeployApp 115.25
143 TestMultiControlPlane/serial/PingHostFromPods 0.08
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 39.95
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.72
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.34
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 9.8
165 TestJSONOutput/start/Command 9.85
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.2
197 TestMountStart/serial/StartWithMountFirst 9.87
200 TestMultiNode/serial/FreshStart2Nodes 9.85
201 TestMultiNode/serial/DeployApp2Nodes 98.37
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.13
208 TestMultiNode/serial/StartAfterStop 50.42
209 TestMultiNode/serial/RestartKeepsNodes 8.65
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.24
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.35
217 TestPreload 9.96
219 TestScheduledStopUnix 9.89
220 TestSkaffold 12.8
223 TestRunningBinaryUpgrade 598.16
225 TestKubernetesUpgrade 18.24
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.02
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.96
241 TestStoppedBinaryUpgrade/Upgrade 579.19
243 TestPause/serial/Start 9.93
253 TestNoKubernetes/serial/StartWithK8s 9.8
254 TestNoKubernetes/serial/StartWithStopK8s 5.27
255 TestNoKubernetes/serial/Start 5.29
259 TestNoKubernetes/serial/StartNoArgs 5.33
261 TestNetworkPlugins/group/auto/Start 10.08
262 TestNetworkPlugins/group/calico/Start 9.86
263 TestNetworkPlugins/group/custom-flannel/Start 9.78
264 TestNetworkPlugins/group/false/Start 9.76
265 TestNetworkPlugins/group/kindnet/Start 9.81
266 TestNetworkPlugins/group/flannel/Start 9.96
267 TestNetworkPlugins/group/enable-default-cni/Start 9.82
268 TestNetworkPlugins/group/bridge/Start 9.82
269 TestNetworkPlugins/group/kubenet/Start 9.84
271 TestStartStop/group/old-k8s-version/serial/FirstStart 10
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.07
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.12
283 TestStartStop/group/no-preload/serial/FirstStart 9.73
284 TestStartStop/group/no-preload/serial/DeployApp 0.08
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/no-preload/serial/SecondStart 5.26
290 TestStartStop/group/embed-certs/serial/FirstStart 9.98
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/no-preload/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.78
297 TestStartStop/group/embed-certs/serial/DeployApp 0.09
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/embed-certs/serial/SecondStart 5.26
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.99
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.25
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (13.8s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-931000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-931000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.798823125s)

-- stdout --
	{"specversion":"1.0","id":"4c6f0214-a65a-4217-8bbe-c9981a3dc5e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-931000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"adfe6d84-aa7e-4c6a-a110-c7eadc4db72c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18804"}}
	{"specversion":"1.0","id":"75016ece-e888-4e5c-a5cf-917a30c8262e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig"}}
	{"specversion":"1.0","id":"a52b1e09-7bb5-49ef-8258-e7aa23b806f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ac6987c6-812c-4eee-83d3-cd6b8e0f9341","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f2593d6-caff-47c0-83ae-b8b727977ccd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube"}}
	{"specversion":"1.0","id":"e56fcf10-8c06-48e5-972e-1477f397baa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"04343509-a0df-45ca-8677-df0e28fd1d75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9eb8fc8-973a-4700-b640-5679f6239985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"422d8505-4286-4e32-8d27-3acbf27ecbf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"361c67e2-df52-400b-9722-54859a9689f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-931000\" primary control-plane node in \"download-only-931000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3542231c-47f5-4c88-ab75-17f2388a62f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebcf5bc0-7a41-440c-b90f-cd9743238df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00] Decompressors:map[bz2:0x140006d3ae0 gz:0x140006d3ae8 tar:0x140006d3a80 tar.bz2:0x140006d3a90 tar.gz:0x140006d3ab0 tar.xz:0x140006d3ac0 tar.zst:0x140006d3ad0 tbz2:0x140006d3a90 tgz:0x140006d3ab0 txz:0x140006d3ac0 tzst:0x140006d3ad0 xz:0x140006d3d90 zip:0x140006d3dc0 zst:0x140006d3d98] Getters:map[file:0x140015c8880 http:0x140005c8230 https:0x140005c8280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"1fed25b8-3d63-4a32-b450-838fd051c857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0507 10:56:59.853357    9424 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:56:59.853516    9424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:56:59.853520    9424 out.go:304] Setting ErrFile to fd 2...
	I0507 10:56:59.853522    9424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:56:59.853666    9424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	W0507 10:56:59.853757    9424 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18804-8175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18804-8175/.minikube/config/config.json: no such file or directory
	I0507 10:56:59.855067    9424 out.go:298] Setting JSON to true
	I0507 10:56:59.872590    9424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5190,"bootTime":1715099429,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:56:59.872654    9424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:56:59.875884    9424 out.go:97] [download-only-931000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:56:59.880143    9424 out.go:169] MINIKUBE_LOCATION=18804
	I0507 10:56:59.876061    9424 notify.go:220] Checking for updates...
	W0507 10:56:59.876120    9424 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball: no such file or directory
	I0507 10:56:59.886984    9424 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:56:59.890190    9424 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:56:59.893537    9424 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:56:59.895141    9424 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	W0507 10:56:59.901409    9424 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0507 10:56:59.901626    9424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:56:59.905120    9424 out.go:97] Using the qemu2 driver based on user configuration
	I0507 10:56:59.905137    9424 start.go:297] selected driver: qemu2
	I0507 10:56:59.905151    9424 start.go:901] validating driver "qemu2" against <nil>
	I0507 10:56:59.905211    9424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 10:56:59.908032    9424 out.go:169] Automatically selected the socket_vmnet network
	I0507 10:56:59.913345    9424 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0507 10:56:59.913444    9424 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 10:56:59.913472    9424 cni.go:84] Creating CNI manager for ""
	I0507 10:56:59.913492    9424 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0507 10:56:59.913551    9424 start.go:340] cluster config:
	{Name:download-only-931000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:56:59.917982    9424 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 10:56:59.921482    9424 out.go:97] Downloading VM boot image ...
	I0507 10:56:59.921507    9424 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso
	I0507 10:57:05.088110    9424 out.go:97] Starting "download-only-931000" primary control-plane node in "download-only-931000" cluster
	I0507 10:57:05.088132    9424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 10:57:05.148212    9424 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 10:57:05.148217    9424 cache.go:56] Caching tarball of preloaded images
	I0507 10:57:05.148389    9424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 10:57:05.153267    9424 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0507 10:57:05.153274    9424 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:05.237144    9424 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 10:57:12.518787    9424 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:12.518960    9424 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:13.217097    9424 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0507 10:57:13.217296    9424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/download-only-931000/config.json ...
	I0507 10:57:13.217316    9424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/download-only-931000/config.json: {Name:mkde7b5a354249061a21034a86d309e14beb0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 10:57:13.218786    9424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 10:57:13.218967    9424 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0507 10:57:13.571734    9424 out.go:169] 
	W0507 10:57:13.577853    9424 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00] Decompressors:map[bz2:0x140006d3ae0 gz:0x140006d3ae8 tar:0x140006d3a80 tar.bz2:0x140006d3a90 tar.gz:0x140006d3ab0 tar.xz:0x140006d3ac0 tar.zst:0x140006d3ad0 tbz2:0x140006d3a90 tgz:0x140006d3ab0 txz:0x140006d3ac0 tzst:0x140006d3ad0 xz:0x140006d3d90 zip:0x140006d3dc0 zst:0x140006d3d98] Getters:map[file:0x140015c8880 http:0x140005c8230 https:0x140005c8280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0507 10:57:13.577878    9424 out_reason.go:110] 
	W0507 10:57:13.586717    9424 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 10:57:13.590790    9424 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-931000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.80s)
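
The error event above pins the failure on a kubectl download: the v1.20.0 darwin/arm64 checksum URL returns HTTP 404, which suggests no kubectl artifact was ever published for that platform/version combination. A minimal manual check, assuming only that curl is on PATH (the URL is copied verbatim from the log):

	# Follow redirects and print the final HTTP status for the checksum file
	# minikube tried to fetch; a 404 here reproduces the
	# "Error downloading checksum file: bad response code: 404" in the event.
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256

This also explains the follow-on TestDownloadOnly/v1.20.0/kubectl failure below: the binary was never cached, so the stat on the cache path cannot succeed.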

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-287000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-287000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.815797708s)

-- stdout --
	* [offline-docker-287000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-287000" primary control-plane node in "offline-docker-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:08:27.037240   11341 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:08:27.037376   11341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:27.037380   11341 out.go:304] Setting ErrFile to fd 2...
	I0507 11:08:27.037382   11341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:27.037505   11341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:08:27.038835   11341 out.go:298] Setting JSON to false
	I0507 11:08:27.056567   11341 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5878,"bootTime":1715099429,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:08:27.056649   11341 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:08:27.062235   11341 out.go:177] * [offline-docker-287000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:08:27.070283   11341 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:08:27.070306   11341 notify.go:220] Checking for updates...
	I0507 11:08:27.077152   11341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:08:27.080241   11341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:08:27.083267   11341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:08:27.086141   11341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:08:27.089183   11341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:08:27.092556   11341 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:08:27.092621   11341 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:08:27.096155   11341 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:08:27.103248   11341 start.go:297] selected driver: qemu2
	I0507 11:08:27.103258   11341 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:08:27.103267   11341 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:08:27.105292   11341 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:08:27.109183   11341 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:08:27.112323   11341 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:08:27.112341   11341 cni.go:84] Creating CNI manager for ""
	I0507 11:08:27.112348   11341 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:08:27.112362   11341 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:08:27.112395   11341 start.go:340] cluster config:
	{Name:offline-docker-287000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:08:27.116940   11341 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:08:27.123198   11341 out.go:177] * Starting "offline-docker-287000" primary control-plane node in "offline-docker-287000" cluster
	I0507 11:08:27.127210   11341 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:08:27.127247   11341 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:08:27.127257   11341 cache.go:56] Caching tarball of preloaded images
	I0507 11:08:27.127332   11341 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:08:27.127338   11341 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:08:27.127405   11341 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/offline-docker-287000/config.json ...
	I0507 11:08:27.127416   11341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/offline-docker-287000/config.json: {Name:mkdbc5865a7f67482e54d758d9c34517153bb9dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:08:27.127734   11341 start.go:360] acquireMachinesLock for offline-docker-287000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:27.127777   11341 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "offline-docker-287000"
	I0507 11:08:27.127788   11341 start.go:93] Provisioning new machine with config: &{Name:offline-docker-287000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:27.127821   11341 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:27.132266   11341 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:27.147857   11341 start.go:159] libmachine.API.Create for "offline-docker-287000" (driver="qemu2")
	I0507 11:08:27.147887   11341 client.go:168] LocalClient.Create starting
	I0507 11:08:27.147962   11341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:27.147995   11341 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:27.148003   11341 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:27.148046   11341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:27.148068   11341 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:27.148079   11341 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:27.148425   11341 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:27.275371   11341 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:27.419448   11341 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:27.419456   11341 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:27.419618   11341 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2
	I0507 11:08:27.432799   11341 main.go:141] libmachine: STDOUT: 
	I0507 11:08:27.432827   11341 main.go:141] libmachine: STDERR: 
	I0507 11:08:27.432896   11341 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2 +20000M
	I0507 11:08:27.445513   11341 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:27.445531   11341 main.go:141] libmachine: STDERR: 
	I0507 11:08:27.445552   11341 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2
	I0507 11:08:27.445557   11341 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:27.445590   11341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a3:30:ce:43:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2
	I0507 11:08:27.447440   11341 main.go:141] libmachine: STDOUT: 
	I0507 11:08:27.447456   11341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:27.447481   11341 client.go:171] duration metric: took 299.600125ms to LocalClient.Create
	I0507 11:08:29.449479   11341 start.go:128] duration metric: took 2.321733291s to createHost
	I0507 11:08:29.449498   11341 start.go:83] releasing machines lock for "offline-docker-287000", held for 2.321798583s
	W0507 11:08:29.449511   11341 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:29.461014   11341 out.go:177] * Deleting "offline-docker-287000" in qemu2 ...
	W0507 11:08:29.469315   11341 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:29.469324   11341 start.go:728] Will try again in 5 seconds ...
	I0507 11:08:34.471389   11341 start.go:360] acquireMachinesLock for offline-docker-287000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:34.471853   11341 start.go:364] duration metric: took 362.25µs to acquireMachinesLock for "offline-docker-287000"
	I0507 11:08:34.472021   11341 start.go:93] Provisioning new machine with config: &{Name:offline-docker-287000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-287000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:34.472393   11341 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:34.477151   11341 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:34.528006   11341 start.go:159] libmachine.API.Create for "offline-docker-287000" (driver="qemu2")
	I0507 11:08:34.528064   11341 client.go:168] LocalClient.Create starting
	I0507 11:08:34.528175   11341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:34.528247   11341 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:34.528261   11341 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:34.528326   11341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:34.528369   11341 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:34.528380   11341 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:34.528977   11341 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:34.666598   11341 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:34.755801   11341 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:34.755806   11341 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:34.755974   11341 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2
	I0507 11:08:34.768325   11341 main.go:141] libmachine: STDOUT: 
	I0507 11:08:34.768345   11341 main.go:141] libmachine: STDERR: 
	I0507 11:08:34.768399   11341 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2 +20000M
	I0507 11:08:34.779192   11341 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:34.779209   11341 main.go:141] libmachine: STDERR: 
	I0507 11:08:34.779223   11341 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2
	I0507 11:08:34.779228   11341 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:34.779263   11341 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b3:35:04:90:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/offline-docker-287000/disk.qcow2
	I0507 11:08:34.780984   11341 main.go:141] libmachine: STDOUT: 
	I0507 11:08:34.781002   11341 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:34.781022   11341 client.go:171] duration metric: took 252.961292ms to LocalClient.Create
	I0507 11:08:36.783133   11341 start.go:128] duration metric: took 2.310777625s to createHost
	I0507 11:08:36.783247   11341 start.go:83] releasing machines lock for "offline-docker-287000", held for 2.31140575s
	W0507 11:08:36.783376   11341 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:36.794641   11341 out.go:177] 
	W0507 11:08:36.797002   11341 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:08:36.797015   11341 out.go:239] * 
	* 
	W0507 11:08:36.797785   11341 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:08:36.804657   11341 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-287000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-07 11:08:36.826226 -0700 PDT m=+697.080147001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-287000 -n offline-docker-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-287000 -n offline-docker-287000: exit status 7 (40.087125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-287000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-287000
--- FAIL: TestOffline (9.95s)
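
Both provisioning attempts in this test (and in most of the other failures in this run) die on the same host-side error: Failed to connect to "/var/run/socket_vmnet": Connection refused. QEMU's socket_vmnet_client cannot reach the socket_vmnet daemon, so the VM never starts. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as in minikube's qemu driver setup (the service command is an assumption, not taken from this log; the socket path is copied from the failing qemu invocation above):

	# Is the daemon's socket present, and is the daemon itself running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, (re)start it; socket_vmnet must run as root to create the socket.
	sudo brew services start socket_vmnet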

TestAddons/Setup (10.46s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-189000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-189000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.45936925s)

-- stdout --
	* [addons-189000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-189000" primary control-plane node in "addons-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 10:57:23.151665    9548 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:57:23.151815    9548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:57:23.151818    9548 out.go:304] Setting ErrFile to fd 2...
	I0507 10:57:23.151821    9548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:57:23.151938    9548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:57:23.153026    9548 out.go:298] Setting JSON to false
	I0507 10:57:23.169234    9548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5214,"bootTime":1715099429,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:57:23.169307    9548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:57:23.173587    9548 out.go:177] * [addons-189000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:57:23.180485    9548 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 10:57:23.180544    9548 notify.go:220] Checking for updates...
	I0507 10:57:23.187569    9548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:57:23.190561    9548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:57:23.193461    9548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:57:23.196530    9548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 10:57:23.199465    9548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 10:57:23.202714    9548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:57:23.206641    9548 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 10:57:23.213458    9548 start.go:297] selected driver: qemu2
	I0507 10:57:23.213466    9548 start.go:901] validating driver "qemu2" against <nil>
	I0507 10:57:23.213472    9548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 10:57:23.215738    9548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 10:57:23.218670    9548 out.go:177] * Automatically selected the socket_vmnet network
	I0507 10:57:23.221567    9548 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 10:57:23.221588    9548 cni.go:84] Creating CNI manager for ""
	I0507 10:57:23.221596    9548 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 10:57:23.221602    9548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 10:57:23.221643    9548 start.go:340] cluster config:
	{Name:addons-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:57:23.226213    9548 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 10:57:23.234555    9548 out.go:177] * Starting "addons-189000" primary control-plane node in "addons-189000" cluster
	I0507 10:57:23.238459    9548 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 10:57:23.238476    9548 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 10:57:23.238487    9548 cache.go:56] Caching tarball of preloaded images
	I0507 10:57:23.238545    9548 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 10:57:23.238550    9548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 10:57:23.238765    9548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/addons-189000/config.json ...
	I0507 10:57:23.238777    9548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/addons-189000/config.json: {Name:mk1898b43b30cf2fbf63d8b9032efdb6385fe2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 10:57:23.238999    9548 start.go:360] acquireMachinesLock for addons-189000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 10:57:23.239065    9548 start.go:364] duration metric: took 60.375µs to acquireMachinesLock for "addons-189000"
	I0507 10:57:23.239078    9548 start.go:93] Provisioning new machine with config: &{Name:addons-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 10:57:23.239111    9548 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 10:57:23.245461    9548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0507 10:57:23.266800    9548 start.go:159] libmachine.API.Create for "addons-189000" (driver="qemu2")
	I0507 10:57:23.266830    9548 client.go:168] LocalClient.Create starting
	I0507 10:57:23.266953    9548 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 10:57:23.361455    9548 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 10:57:23.796327    9548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 10:57:24.029844    9548 main.go:141] libmachine: Creating SSH key...
	I0507 10:57:24.126575    9548 main.go:141] libmachine: Creating Disk image...
	I0507 10:57:24.126583    9548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 10:57:24.126750    9548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2
	I0507 10:57:24.139042    9548 main.go:141] libmachine: STDOUT: 
	I0507 10:57:24.139064    9548 main.go:141] libmachine: STDERR: 
	I0507 10:57:24.139130    9548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2 +20000M
	I0507 10:57:24.150093    9548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 10:57:24.150127    9548 main.go:141] libmachine: STDERR: 
	I0507 10:57:24.150145    9548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2
	I0507 10:57:24.150150    9548 main.go:141] libmachine: Starting QEMU VM...
	I0507 10:57:24.150187    9548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:1b:c4:2a:59:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2
	I0507 10:57:24.151964    9548 main.go:141] libmachine: STDOUT: 
	I0507 10:57:24.151979    9548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 10:57:24.152003    9548 client.go:171] duration metric: took 885.199542ms to LocalClient.Create
	I0507 10:57:26.154168    9548 start.go:128] duration metric: took 2.915128125s to createHost
	I0507 10:57:26.154243    9548 start.go:83] releasing machines lock for "addons-189000", held for 2.91527075s
	W0507 10:57:26.154316    9548 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:57:26.172761    9548 out.go:177] * Deleting "addons-189000" in qemu2 ...
	W0507 10:57:26.197404    9548 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:57:26.197462    9548 start.go:728] Will try again in 5 seconds ...
	I0507 10:57:31.199524    9548 start.go:360] acquireMachinesLock for addons-189000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 10:57:31.200042    9548 start.go:364] duration metric: took 406.875µs to acquireMachinesLock for "addons-189000"
	I0507 10:57:31.200162    9548 start.go:93] Provisioning new machine with config: &{Name:addons-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 10:57:31.200466    9548 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 10:57:31.214148    9548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0507 10:57:31.256943    9548 start.go:159] libmachine.API.Create for "addons-189000" (driver="qemu2")
	I0507 10:57:31.257024    9548 client.go:168] LocalClient.Create starting
	I0507 10:57:31.257230    9548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 10:57:31.257308    9548 main.go:141] libmachine: Decoding PEM data...
	I0507 10:57:31.257329    9548 main.go:141] libmachine: Parsing certificate...
	I0507 10:57:31.257432    9548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 10:57:31.257485    9548 main.go:141] libmachine: Decoding PEM data...
	I0507 10:57:31.257501    9548 main.go:141] libmachine: Parsing certificate...
	I0507 10:57:31.258072    9548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 10:57:31.432955    9548 main.go:141] libmachine: Creating SSH key...
	I0507 10:57:31.512743    9548 main.go:141] libmachine: Creating Disk image...
	I0507 10:57:31.512751    9548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 10:57:31.512906    9548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2
	I0507 10:57:31.524976    9548 main.go:141] libmachine: STDOUT: 
	I0507 10:57:31.524996    9548 main.go:141] libmachine: STDERR: 
	I0507 10:57:31.525045    9548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2 +20000M
	I0507 10:57:31.536285    9548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 10:57:31.536304    9548 main.go:141] libmachine: STDERR: 
	I0507 10:57:31.536329    9548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2
	I0507 10:57:31.536332    9548 main.go:141] libmachine: Starting QEMU VM...
	I0507 10:57:31.536369    9548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:16:1f:e5:83:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/addons-189000/disk.qcow2
	I0507 10:57:31.538185    9548 main.go:141] libmachine: STDOUT: 
	I0507 10:57:31.538201    9548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 10:57:31.538215    9548 client.go:171] duration metric: took 281.167834ms to LocalClient.Create
	I0507 10:57:33.540398    9548 start.go:128] duration metric: took 2.339952667s to createHost
	I0507 10:57:33.540497    9548 start.go:83] releasing machines lock for "addons-189000", held for 2.340511666s
	W0507 10:57:33.541022    9548 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:57:33.552569    9548 out.go:177] 
	W0507 10:57:33.557634    9548 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 10:57:33.557688    9548 out.go:239] * 
	* 
	W0507 10:57:33.560224    9548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 10:57:33.569508    9548 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-189000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.46s)
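
The trace above narrows the failure to one step: qemu-img convert and resize both succeed, but the socket_vmnet_client wrapper that launches qemu-system-aarch64 dies before QEMU starts, on both the first attempt and the 5-second retry. The failure can be reproduced without minikube; a sketch using the binary and socket paths from the log (the payload command is arbitrary here, since judging by the -netdev socket,id=net0,fd=3 argument in the log the payload simply inherits the connected socket as fd 3):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
    # On this host the call fails with: Failed to connect to "/var/run/socket_vmnet": Connection refused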

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-048000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-048000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.822131042s)

-- stdout --
	* [cert-options-048000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-048000" primary control-plane node in "cert-options-048000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-048000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-048000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-048000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-048000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-048000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.110625ms)

-- stdout --
	* The control-plane node cert-options-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-048000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-048000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-048000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-048000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-048000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.885667ms)

-- stdout --
	* The control-plane node cert-options-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-048000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-048000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-048000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-048000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-07 11:09:06.916694 -0700 PDT m=+727.171678210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-048000 -n cert-options-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-048000 -n cert-options-048000: exit status 7 (29.460791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-048000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-048000
--- FAIL: TestCertOptions (10.10s)
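
Note the exit-code pattern: 80 (GUEST_PROVISION) is the failed start, 83 is "host is not running", and 7 is the status probe; every SAN assertion then fails because there is no running VM with an apiserver.crt to inspect. Once a VM does come up, the check the test performs can be run by hand; a sketch using the profile name and cert path from the log:

    out/minikube-darwin-arm64 ssh -p cert-options-048000 -- "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # Expected SANs: 127.0.0.1, 192.168.15.15, localhost, www.google.com (plus minikube defaults)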

TestCertExpiration (195.14s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.763202459s)

-- stdout --
	* [cert-expiration-673000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-673000" primary control-plane node in "cert-expiration-673000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-673000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-673000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.225352125s)

-- stdout --
	* [cert-expiration-673000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-673000" primary control-plane node in "cert-expiration-673000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-673000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-673000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-673000" primary control-plane node in "cert-expiration-673000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-673000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-673000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-07 11:12:06.93096 -0700 PDT m=+907.192306335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-673000 -n cert-expiration-673000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-673000 -n cert-expiration-673000: exit status 7 (48.545958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-673000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-673000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-673000
--- FAIL: TestCertExpiration (195.14s)
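
The 195s duration is expected even though both starts fail within seconds: the test waits out the 3-minute certificate expiry window between the two starts (9.8s + ~180s + 5.2s matches the total). The flow it exercises, as a manual sketch with the flags and profile name taken from the log:

    out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=3m --driver=qemu2
    sleep 180   # let the 3m certificates expire
    out/minikube-darwin-arm64 start -p cert-expiration-673000 --memory=2048 --cert-expiration=8760h --driver=qemu2
    # A healthy second start should warn about the expired certs being regenerated; here it never gets past VM creation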

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-297000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-297000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.860875292s)

-- stdout --
	* [docker-flags-297000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-297000" primary control-plane node in "docker-flags-297000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:08:46.858757   11550 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:08:46.858896   11550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:46.858899   11550 out.go:304] Setting ErrFile to fd 2...
	I0507 11:08:46.858901   11550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:46.859025   11550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:08:46.860103   11550 out.go:298] Setting JSON to false
	I0507 11:08:46.876033   11550 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5897,"bootTime":1715099429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:08:46.876095   11550 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:08:46.881421   11550 out.go:177] * [docker-flags-297000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:08:46.888349   11550 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:08:46.892360   11550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:08:46.888392   11550 notify.go:220] Checking for updates...
	I0507 11:08:46.898294   11550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:08:46.901365   11550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:08:46.904327   11550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:08:46.907370   11550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:08:46.910650   11550 config.go:182] Loaded profile config "force-systemd-flag-303000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:08:46.910724   11550 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:08:46.910774   11550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:08:46.915302   11550 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:08:46.922313   11550 start.go:297] selected driver: qemu2
	I0507 11:08:46.922319   11550 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:08:46.922334   11550 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:08:46.924625   11550 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:08:46.928341   11550 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:08:46.931386   11550 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0507 11:08:46.931403   11550 cni.go:84] Creating CNI manager for ""
	I0507 11:08:46.931409   11550 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:08:46.931413   11550 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:08:46.931445   11550 start.go:340] cluster config:
	{Name:docker-flags-297000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:08:46.936033   11550 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:08:46.943262   11550 out.go:177] * Starting "docker-flags-297000" primary control-plane node in "docker-flags-297000" cluster
	I0507 11:08:46.947329   11550 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:08:46.947344   11550 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:08:46.947352   11550 cache.go:56] Caching tarball of preloaded images
	I0507 11:08:46.947417   11550 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:08:46.947423   11550 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:08:46.947483   11550 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/docker-flags-297000/config.json ...
	I0507 11:08:46.947495   11550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/docker-flags-297000/config.json: {Name:mk3dc72e4c9ee7daa290edde75601d3f9c51d38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:08:46.947727   11550 start.go:360] acquireMachinesLock for docker-flags-297000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:46.947765   11550 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "docker-flags-297000"
	I0507 11:08:46.947797   11550 start.go:93] Provisioning new machine with config: &{Name:docker-flags-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:46.947827   11550 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:46.956316   11550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:46.974141   11550 start.go:159] libmachine.API.Create for "docker-flags-297000" (driver="qemu2")
	I0507 11:08:46.974174   11550 client.go:168] LocalClient.Create starting
	I0507 11:08:46.974260   11550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:46.974289   11550 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:46.974299   11550 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:46.974342   11550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:46.974366   11550 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:46.974372   11550 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:46.974717   11550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:47.093627   11550 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:47.253773   11550 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:47.253779   11550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:47.253947   11550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2
	I0507 11:08:47.266830   11550 main.go:141] libmachine: STDOUT: 
	I0507 11:08:47.266854   11550 main.go:141] libmachine: STDERR: 
	I0507 11:08:47.266908   11550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2 +20000M
	I0507 11:08:47.277747   11550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:47.277763   11550 main.go:141] libmachine: STDERR: 
	I0507 11:08:47.277775   11550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2
	I0507 11:08:47.277781   11550 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:47.277808   11550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:13:4d:35:0d:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2
	I0507 11:08:47.279598   11550 main.go:141] libmachine: STDOUT: 
	I0507 11:08:47.279615   11550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:47.279633   11550 client.go:171] duration metric: took 305.463791ms to LocalClient.Create
	I0507 11:08:49.281733   11550 start.go:128] duration metric: took 2.333968s to createHost
	I0507 11:08:49.281787   11550 start.go:83] releasing machines lock for "docker-flags-297000", held for 2.334095083s
	W0507 11:08:49.281915   11550 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:49.297970   11550 out.go:177] * Deleting "docker-flags-297000" in qemu2 ...
	W0507 11:08:49.313757   11550 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:49.313782   11550 start.go:728] Will try again in 5 seconds ...
	I0507 11:08:54.315824   11550 start.go:360] acquireMachinesLock for docker-flags-297000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:54.335144   11550 start.go:364] duration metric: took 19.171083ms to acquireMachinesLock for "docker-flags-297000"
	I0507 11:08:54.335306   11550 start.go:93] Provisioning new machine with config: &{Name:docker-flags-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:54.335602   11550 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:54.341298   11550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:54.389133   11550 start.go:159] libmachine.API.Create for "docker-flags-297000" (driver="qemu2")
	I0507 11:08:54.389176   11550 client.go:168] LocalClient.Create starting
	I0507 11:08:54.389343   11550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:54.389411   11550 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:54.389429   11550 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:54.389494   11550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:54.389538   11550 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:54.389550   11550 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:54.390202   11550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:54.541592   11550 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:54.612486   11550 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:54.612491   11550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:54.612669   11550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2
	I0507 11:08:54.625406   11550 main.go:141] libmachine: STDOUT: 
	I0507 11:08:54.625427   11550 main.go:141] libmachine: STDERR: 
	I0507 11:08:54.625492   11550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2 +20000M
	I0507 11:08:54.636287   11550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:54.636304   11550 main.go:141] libmachine: STDERR: 
	I0507 11:08:54.636318   11550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2
	I0507 11:08:54.636323   11550 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:54.636357   11550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:0d:82:d6:fc:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/docker-flags-297000/disk.qcow2
	I0507 11:08:54.638044   11550 main.go:141] libmachine: STDOUT: 
	I0507 11:08:54.638071   11550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:54.638084   11550 client.go:171] duration metric: took 248.910375ms to LocalClient.Create
	I0507 11:08:56.640189   11550 start.go:128] duration metric: took 2.304637208s to createHost
	I0507 11:08:56.640231   11550 start.go:83] releasing machines lock for "docker-flags-297000", held for 2.305122584s
	W0507 11:08:56.640583   11550 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:56.655208   11550 out.go:177] 
	W0507 11:08:56.661399   11550 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:08:56.661426   11550 out.go:239] * 
	W0507 11:08:56.664124   11550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:08:56.678220   11550 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-297000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
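Both create attempts in the capture above die at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never actually launched. A minimal way to reproduce the failure outside the test harness, assuming only the binary and socket paths already shown in the log, is:

	# Paths are taken verbatim from the qemu invocation logged above.
	# socket_vmnet_client connects to the daemon's unix socket and then
	# execs the rest of its argv; /usr/bin/true exercises just the connect.
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If the daemon is down, the second command fails with the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen throughout this run.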
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.552459ms)

-- stdout --
	* The control-plane node docker-flags-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-297000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-297000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-297000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-297000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-297000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-297000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-297000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.265375ms)

-- stdout --
	* The control-plane node docker-flags-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-297000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-297000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-297000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-297000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-297000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-05-07 11:08:56.815082 -0700 PDT m=+717.069709460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-297000 -n docker-flags-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-297000 -n docker-flags-297000: exit status 7 (28.038291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-297000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-297000
--- FAIL: TestDockerFlags (10.11s)
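Note that every assertion above fails for one environmental reason: the socket_vmnet daemon is not accepting connections, so no VM boots and the profile stays Stopped. A hedged recovery sketch, assuming socket_vmnet was installed per its standard instructions under /opt/socket_vmnet with the launchd label io.github.lima-vm.socket_vmnet (both assumptions about this host; adjust for a Homebrew-managed install):

	# Restart the (assumed) launchd daemon; socket_vmnet must run as root
	# to create vmnet interfaces, hence sudo.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	ls -l /var/run/socket_vmnet   # the listening socket should reappear

Re-running the suite only makes sense once the socket accepts connections again.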

TestForceSystemdFlag (10.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-303000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-303000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.078752083s)

-- stdout --
	* [force-systemd-flag-303000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-303000" primary control-plane node in "force-systemd-flag-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:08:41.676615   11526 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:08:41.676737   11526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:41.676740   11526 out.go:304] Setting ErrFile to fd 2...
	I0507 11:08:41.676742   11526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:41.676894   11526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:08:41.677935   11526 out.go:298] Setting JSON to false
	I0507 11:08:41.693730   11526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5892,"bootTime":1715099429,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:08:41.693792   11526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:08:41.699225   11526 out.go:177] * [force-systemd-flag-303000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:08:41.706149   11526 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:08:41.711119   11526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:08:41.706190   11526 notify.go:220] Checking for updates...
	I0507 11:08:41.715594   11526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:08:41.719119   11526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:08:41.722773   11526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:08:41.726127   11526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:08:41.729491   11526 config.go:182] Loaded profile config "force-systemd-env-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:08:41.729571   11526 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:08:41.729618   11526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:08:41.734135   11526 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:08:41.741093   11526 start.go:297] selected driver: qemu2
	I0507 11:08:41.741100   11526 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:08:41.741106   11526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:08:41.743417   11526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:08:41.747171   11526 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:08:41.754157   11526 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 11:08:41.754185   11526 cni.go:84] Creating CNI manager for ""
	I0507 11:08:41.754194   11526 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:08:41.754201   11526 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:08:41.754227   11526 start.go:340] cluster config:
	{Name:force-systemd-flag-303000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:08:41.758799   11526 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:08:41.764375   11526 out.go:177] * Starting "force-systemd-flag-303000" primary control-plane node in "force-systemd-flag-303000" cluster
	I0507 11:08:41.768151   11526 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:08:41.768165   11526 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:08:41.768177   11526 cache.go:56] Caching tarball of preloaded images
	I0507 11:08:41.768233   11526 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:08:41.768238   11526 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:08:41.768301   11526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/force-systemd-flag-303000/config.json ...
	I0507 11:08:41.768312   11526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/force-systemd-flag-303000/config.json: {Name:mkf97e3de3deffbaec5f8936506bda17a4e45d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:08:41.768746   11526 start.go:360] acquireMachinesLock for force-systemd-flag-303000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:41.768780   11526 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "force-systemd-flag-303000"
	I0507 11:08:41.768792   11526 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:41.768819   11526 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:41.772499   11526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:41.789379   11526 start.go:159] libmachine.API.Create for "force-systemd-flag-303000" (driver="qemu2")
	I0507 11:08:41.789405   11526 client.go:168] LocalClient.Create starting
	I0507 11:08:41.789467   11526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:41.789497   11526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:41.789507   11526 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:41.789546   11526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:41.789568   11526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:41.789578   11526 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:41.789896   11526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:41.909169   11526 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:42.010541   11526 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:42.010547   11526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:42.010737   11526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2
	I0507 11:08:42.023604   11526 main.go:141] libmachine: STDOUT: 
	I0507 11:08:42.023629   11526 main.go:141] libmachine: STDERR: 
	I0507 11:08:42.023680   11526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2 +20000M
	I0507 11:08:42.034496   11526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:42.034515   11526 main.go:141] libmachine: STDERR: 
	I0507 11:08:42.034530   11526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2
	I0507 11:08:42.034534   11526 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:42.034584   11526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:1b:b2:e9:c0:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2
	I0507 11:08:42.036351   11526 main.go:141] libmachine: STDOUT: 
	I0507 11:08:42.036366   11526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:42.036387   11526 client.go:171] duration metric: took 246.986167ms to LocalClient.Create
	I0507 11:08:44.038513   11526 start.go:128] duration metric: took 2.269748583s to createHost
	I0507 11:08:44.038601   11526 start.go:83] releasing machines lock for "force-systemd-flag-303000", held for 2.269888334s
	W0507 11:08:44.038661   11526 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:44.046055   11526 out.go:177] * Deleting "force-systemd-flag-303000" in qemu2 ...
	W0507 11:08:44.071673   11526 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:44.071703   11526 start.go:728] Will try again in 5 seconds ...
	I0507 11:08:49.073703   11526 start.go:360] acquireMachinesLock for force-systemd-flag-303000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:49.281961   11526 start.go:364] duration metric: took 208.171083ms to acquireMachinesLock for "force-systemd-flag-303000"
	I0507 11:08:49.282125   11526 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:49.282450   11526 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:49.290021   11526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:49.337700   11526 start.go:159] libmachine.API.Create for "force-systemd-flag-303000" (driver="qemu2")
	I0507 11:08:49.337763   11526 client.go:168] LocalClient.Create starting
	I0507 11:08:49.337896   11526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:49.337962   11526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:49.337979   11526 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:49.338048   11526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:49.338091   11526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:49.338104   11526 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:49.338591   11526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:49.500821   11526 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:49.655423   11526 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:49.655430   11526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:49.655647   11526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2
	I0507 11:08:49.668857   11526 main.go:141] libmachine: STDOUT: 
	I0507 11:08:49.668877   11526 main.go:141] libmachine: STDERR: 
	I0507 11:08:49.668925   11526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2 +20000M
	I0507 11:08:49.679690   11526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:49.679712   11526 main.go:141] libmachine: STDERR: 
	I0507 11:08:49.679724   11526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2
	I0507 11:08:49.679730   11526 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:49.679759   11526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:14:9b:1b:27:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-flag-303000/disk.qcow2
	I0507 11:08:49.681377   11526 main.go:141] libmachine: STDOUT: 
	I0507 11:08:49.681393   11526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:49.681406   11526 client.go:171] duration metric: took 343.6495ms to LocalClient.Create
	I0507 11:08:51.682406   11526 start.go:128] duration metric: took 2.399994917s to createHost
	I0507 11:08:51.687125   11526 start.go:83] releasing machines lock for "force-systemd-flag-303000", held for 2.405177208s
	W0507 11:08:51.687536   11526 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:51.696076   11526 out.go:177] 
	W0507 11:08:51.702977   11526 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:08:51.702999   11526 out.go:239] * 
	W0507 11:08:51.705782   11526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:08:51.713910   11526 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-303000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-303000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-303000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.226833ms)

-- stdout --
	* The control-plane node force-systemd-flag-303000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-303000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-303000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-07 11:08:51.813079 -0700 PDT m=+712.067528960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-303000 -n force-systemd-flag-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-303000 -n force-systemd-flag-303000: exit status 7 (34.444667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-303000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-303000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-303000
--- FAIL: TestForceSystemdFlag (10.30s)
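For reference, the check at docker_test.go:110 needs a running node before it can prove anything. On a cluster that actually boots with --force-systemd, the same command is expected to print the cgroup driver rather than the state=Stopped advice captured above; a sketch, with the expected output being an assumption based on what the test asserts:

	# Same command the test runs against the (here never-started) node;
	# with --force-systemd the expected output is "systemd".
	out/minikube-darwin-arm64 -p force-systemd-flag-303000 ssh "docker info --format {{.CgroupDriver}}"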

TestForceSystemdEnv (9.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-484000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-484000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.667843792s)

-- stdout --
	* [force-systemd-env-484000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-484000" primary control-plane node in "force-systemd-env-484000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-484000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:08:36.987797   11497 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:08:36.987926   11497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:36.987929   11497 out.go:304] Setting ErrFile to fd 2...
	I0507 11:08:36.987932   11497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:08:36.988067   11497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:08:36.989135   11497 out.go:298] Setting JSON to false
	I0507 11:08:37.005585   11497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5887,"bootTime":1715099429,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:08:37.005651   11497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:08:37.009679   11497 out.go:177] * [force-systemd-env-484000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:08:37.016554   11497 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:08:37.020591   11497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:08:37.016614   11497 notify.go:220] Checking for updates...
	I0507 11:08:37.025545   11497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:08:37.028585   11497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:08:37.031702   11497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:08:37.038565   11497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0507 11:08:37.042929   11497 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:08:37.042980   11497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:08:37.047602   11497 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:08:37.054441   11497 start.go:297] selected driver: qemu2
	I0507 11:08:37.054446   11497 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:08:37.054451   11497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:08:37.056550   11497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:08:37.059550   11497 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:08:37.062619   11497 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 11:08:37.062632   11497 cni.go:84] Creating CNI manager for ""
	I0507 11:08:37.062638   11497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:08:37.062642   11497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:08:37.062671   11497 start.go:340] cluster config:
	{Name:force-systemd-env-484000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:08:37.066738   11497 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:08:37.073544   11497 out.go:177] * Starting "force-systemd-env-484000" primary control-plane node in "force-systemd-env-484000" cluster
	I0507 11:08:37.077591   11497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:08:37.077603   11497 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:08:37.077611   11497 cache.go:56] Caching tarball of preloaded images
	I0507 11:08:37.077662   11497 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:08:37.077666   11497 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:08:37.077710   11497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/force-systemd-env-484000/config.json ...
	I0507 11:08:37.077720   11497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/force-systemd-env-484000/config.json: {Name:mkb4c5f2012bc3123f4fccdd01370c8a879226f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:08:37.077919   11497 start.go:360] acquireMachinesLock for force-systemd-env-484000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:37.077949   11497 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "force-systemd-env-484000"
	I0507 11:08:37.077960   11497 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:37.077983   11497 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:37.086583   11497 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:37.101415   11497 start.go:159] libmachine.API.Create for "force-systemd-env-484000" (driver="qemu2")
	I0507 11:08:37.101448   11497 client.go:168] LocalClient.Create starting
	I0507 11:08:37.101516   11497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:37.101544   11497 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:37.101556   11497 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:37.101595   11497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:37.101617   11497 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:37.101622   11497 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:37.101958   11497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:37.219497   11497 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:37.258630   11497 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:37.258636   11497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:37.258809   11497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2
	I0507 11:08:37.271513   11497 main.go:141] libmachine: STDOUT: 
	I0507 11:08:37.271535   11497 main.go:141] libmachine: STDERR: 
	I0507 11:08:37.271596   11497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2 +20000M
	I0507 11:08:37.283127   11497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:37.283158   11497 main.go:141] libmachine: STDERR: 
	I0507 11:08:37.283170   11497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2
	I0507 11:08:37.283174   11497 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:37.283206   11497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:dc:46:c7:af:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2
	I0507 11:08:37.284979   11497 main.go:141] libmachine: STDOUT: 
	I0507 11:08:37.284996   11497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:37.285013   11497 client.go:171] duration metric: took 183.567208ms to LocalClient.Create
	I0507 11:08:39.287202   11497 start.go:128] duration metric: took 2.209260584s to createHost
	I0507 11:08:39.287279   11497 start.go:83] releasing machines lock for "force-systemd-env-484000", held for 2.209398042s
	W0507 11:08:39.287350   11497 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:39.294654   11497 out.go:177] * Deleting "force-systemd-env-484000" in qemu2 ...
	W0507 11:08:39.317102   11497 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:39.317128   11497 start.go:728] Will try again in 5 seconds ...
	I0507 11:08:44.319156   11497 start.go:360] acquireMachinesLock for force-systemd-env-484000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:44.319646   11497 start.go:364] duration metric: took 391.083µs to acquireMachinesLock for "force-systemd-env-484000"
	I0507 11:08:44.319800   11497 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:44.320068   11497 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:44.328688   11497 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0507 11:08:44.378075   11497 start.go:159] libmachine.API.Create for "force-systemd-env-484000" (driver="qemu2")
	I0507 11:08:44.378120   11497 client.go:168] LocalClient.Create starting
	I0507 11:08:44.378240   11497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:44.378307   11497 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:44.378327   11497 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:44.378399   11497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:44.378445   11497 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:44.378458   11497 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:44.379102   11497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:44.509772   11497 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:44.554893   11497 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:44.554898   11497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:44.555081   11497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2
	I0507 11:08:44.567656   11497 main.go:141] libmachine: STDOUT: 
	I0507 11:08:44.567679   11497 main.go:141] libmachine: STDERR: 
	I0507 11:08:44.567733   11497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2 +20000M
	I0507 11:08:44.578933   11497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:44.578962   11497 main.go:141] libmachine: STDERR: 
	I0507 11:08:44.578994   11497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2
	I0507 11:08:44.578998   11497 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:44.579027   11497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:71:f1:43:92:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/force-systemd-env-484000/disk.qcow2
	I0507 11:08:44.580747   11497 main.go:141] libmachine: STDOUT: 
	I0507 11:08:44.580761   11497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:44.580775   11497 client.go:171] duration metric: took 202.655208ms to LocalClient.Create
	I0507 11:08:46.582883   11497 start.go:128] duration metric: took 2.2628615s to createHost
	I0507 11:08:46.582935   11497 start.go:83] releasing machines lock for "force-systemd-env-484000", held for 2.263344458s
	W0507 11:08:46.583304   11497 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-484000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-484000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:46.594900   11497 out.go:177] 
	W0507 11:08:46.598841   11497 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:08:46.598873   11497 out.go:239] * 
	* 
	W0507 11:08:46.601398   11497 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:08:46.610908   11497 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-484000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-484000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-484000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.137583ms)

-- stdout --
	* The control-plane node force-systemd-env-484000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-484000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-484000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-07 11:08:46.703369 -0700 PDT m=+706.957638251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-484000 -n force-systemd-env-484000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-484000 -n force-systemd-env-484000: exit status 7 (33.555958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-484000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-484000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-484000
--- FAIL: TestForceSystemdEnv (9.87s)
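
Every attempt in this test dies at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which dials the daemon socket at /var/run/socket_vmnet and is refused, meaning no socket_vmnet daemon is listening on the CI host. A minimal triage sketch, assuming socket_vmnet was installed under /opt/socket_vmnet as the log paths suggest (the Homebrew service name and the gateway address are assumptions, not taken from this log):

	# Does the socket the driver dials exist, and is a daemon serving it?
	ls -l /var/run/socket_vmnet

	# If socket_vmnet was installed via Homebrew, restart its service:
	sudo brew services restart socket_vmnet

	# Or run the daemon in the foreground and watch it accept connections
	# (the gateway address is socket_vmnet's documented default):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet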

TestErrorSpam/setup (9.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-636000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-636000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 --driver=qemu2 : exit status 80 (9.749610625s)

-- stdout --
	* [nospam-636000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-636000" primary control-plane node in "nospam-636000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-636000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-636000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-636000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-636000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18804
- KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-636000" primary control-plane node in "nospam-636000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-636000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.75s)
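
Both failure modes here are downstream of the same socket_vmnet refusal: the start produced stderr lines outside the test's allow-list, and stdout never reached the kubeadm init sub-steps the test greps for, because kubeadm never ran. A hand-run equivalent of the check, reusing the command from the log with the streams split (file names illustrative, --log_dir omitted):

	# On a healthy host nospam.err stays empty and nospam.out contains
	# "Generating certificates and keys ..." and the other sub-steps:
	out/minikube-darwin-arm64 start -p nospam-636000 -n=1 --memory=2250 \
	    --wait=false --driver=qemu2 >nospam.out 2>nospam.err
	cat nospam.err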

TestFunctional/serial/StartWithProxy (9.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-642000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-642000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.810885458s)

-- stdout --
	* [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-642000" primary control-plane node in "functional-642000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-642000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-642000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18804
- KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-642000" primary control-plane node in "functional-642000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-642000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51062 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (66.865625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.88s)
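
The two unmet expectations above ("Found network options:" in stdout, "You appear to be using a proxy" in stderr) are only printed once minikube gets far enough to compare the proxy environment against the VM's network; here the start aborted first, leaving only the "Local proxy ignored" warnings. The setup can be approximated by hand with any local HTTP proxy (the harness's proxy sat on port 51062 in this run):

	# A healthy run warns about the proxy and lists the network options:
	HTTP_PROXY=localhost:51062 out/minikube-darwin-arm64 start -p functional-642000 \
	    --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2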

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-642000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-642000 --alsologtostderr -v=8: exit status 80 (5.188531375s)

-- stdout --
	* [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-642000" primary control-plane node in "functional-642000" cluster
	* Restarting existing qemu2 VM for "functional-642000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-642000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 10:58:04.290922    9737 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:58:04.291053    9737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:58:04.291056    9737 out.go:304] Setting ErrFile to fd 2...
	I0507 10:58:04.291059    9737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:58:04.291189    9737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:58:04.292194    9737 out.go:298] Setting JSON to false
	I0507 10:58:04.308012    9737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5255,"bootTime":1715099429,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:58:04.308079    9737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:58:04.313063    9737 out.go:177] * [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:58:04.324886    9737 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 10:58:04.329062    9737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:58:04.324926    9737 notify.go:220] Checking for updates...
	I0507 10:58:04.333426    9737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:58:04.336063    9737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:58:04.339062    9737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 10:58:04.342140    9737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 10:58:04.345386    9737 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:58:04.345445    9737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:58:04.350010    9737 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 10:58:04.357009    9737 start.go:297] selected driver: qemu2
	I0507 10:58:04.357014    9737 start.go:901] validating driver "qemu2" against &{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:58:04.357077    9737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 10:58:04.359219    9737 cni.go:84] Creating CNI manager for ""
	I0507 10:58:04.359236    9737 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 10:58:04.359284    9737 start.go:340] cluster config:
	{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:58:04.363595    9737 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 10:58:04.370975    9737 out.go:177] * Starting "functional-642000" primary control-plane node in "functional-642000" cluster
	I0507 10:58:04.375043    9737 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 10:58:04.375063    9737 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 10:58:04.375070    9737 cache.go:56] Caching tarball of preloaded images
	I0507 10:58:04.375133    9737 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 10:58:04.375140    9737 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 10:58:04.375206    9737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/functional-642000/config.json ...
	I0507 10:58:04.375638    9737 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 10:58:04.375668    9737 start.go:364] duration metric: took 23.042µs to acquireMachinesLock for "functional-642000"
	I0507 10:58:04.375678    9737 start.go:96] Skipping create...Using existing machine configuration
	I0507 10:58:04.375683    9737 fix.go:54] fixHost starting: 
	I0507 10:58:04.375806    9737 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
	W0507 10:58:04.375814    9737 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 10:58:04.384002    9737 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
	I0507 10:58:04.388098    9737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
	I0507 10:58:04.390246    9737 main.go:141] libmachine: STDOUT: 
	I0507 10:58:04.390266    9737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 10:58:04.390297    9737 fix.go:56] duration metric: took 14.613333ms for fixHost
	I0507 10:58:04.390301    9737 start.go:83] releasing machines lock for "functional-642000", held for 14.62925ms
	W0507 10:58:04.390309    9737 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 10:58:04.390345    9737 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:58:04.390359    9737 start.go:728] Will try again in 5 seconds ...
	I0507 10:58:09.392400    9737 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 10:58:09.392792    9737 start.go:364] duration metric: took 293.5µs to acquireMachinesLock for "functional-642000"
	I0507 10:58:09.392931    9737 start.go:96] Skipping create...Using existing machine configuration
	I0507 10:58:09.392954    9737 fix.go:54] fixHost starting: 
	I0507 10:58:09.393780    9737 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
	W0507 10:58:09.393806    9737 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 10:58:09.397443    9737 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
	I0507 10:58:09.405391    9737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
	I0507 10:58:09.414819    9737 main.go:141] libmachine: STDOUT: 
	I0507 10:58:09.414879    9737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 10:58:09.415007    9737 fix.go:56] duration metric: took 22.057ms for fixHost
	I0507 10:58:09.415020    9737 start.go:83] releasing machines lock for "functional-642000", held for 22.207125ms
	W0507 10:58:09.415189    9737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:58:09.422178    9737 out.go:177] 
	W0507 10:58:09.426304    9737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 10:58:09.426335    9737 out.go:239] * 
	* 
	W0507 10:58:09.429032    9737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 10:58:09.436254    9737 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-642000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.190258s for "functional-642000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (66.532875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
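
Unlike the fresh creates above, soft start takes the fix-host path: it loads the saved profile, finds the machine state=Stopped, and re-issues the same qemu command (same MAC address, same disk image) rather than recreating the VM, so it hits the identical socket refusal. The profile it reloads is plain JSON on disk and can be inspected directly (path taken from the log):

	cat /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/functional-642000/config.json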

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.587417ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-642000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.1075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
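
A successful start writes a kubeconfig context named after the profile and makes it current; since functional-642000 never came up, kubectl has no current context to report. The two states are easy to tell apart by hand:

	# Lists the contexts in the run's KUBECONFIG; the current one is
	# starred, and here none is set:
	kubectl config get-contexts

	# After a successful start the profile's context can be re-selected:
	kubectl config use-context functional-642000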

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-642000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-642000 get po -A: exit status 1 (26.411209ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-642000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-642000\n"*: args "kubectl --context functional-642000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-642000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.84775ms)

                                                
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl images: exit status 83 (41.615ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (37.956917ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-642000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.896833ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.794208ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
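
Every step of the cache-reload flow executes inside the node, so each command stops at the same guard: the ssh preflight sees host state=Stopped and exits 83 before docker or crictl is ever invoked. The flow itself, runnable once the VM is up, is roughly:

	# Populate the host-side cache, push it into the node, then confirm
	# the image is visible to the node's runtime:
	out/minikube-darwin-arm64 -p functional-642000 cache add registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-642000 cache reload
	out/minikube-darwin-arm64 -p functional-642000 ssh sudo crictl inspecti registry.k8s.io/pause:latest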

TestFunctional/serial/MinikubeKubectlCmd (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 kubectl -- --context functional-642000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 kubectl -- --context functional-642000 get pods: exit status 1 (619.228292ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-642000
	* no server found for cluster "functional-642000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-642000 kubectl -- --context functional-642000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (30.915542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.65s)
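The configuration errors above come from the kubeconfig rather than from kubectl itself: because the earlier start never completed, minikube never wrote a "functional-642000" context or cluster entry. A quick host-side confirmation, using standard kubectl config subcommands (a diagnostic sketch, not part of the test run):

	kubectl config get-contexts                              # "functional-642000" only appears after a successful start
	kubectl config view -o jsonpath='{.clusters[*].name}'    # cluster entries currently known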

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-642000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-642000 get pods: exit status 1 (912.218667ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-642000
	* no server found for cluster "functional-642000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-642000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (28.64075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-642000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-642000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.194485208s)

-- stdout --
	* [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-642000" primary control-plane node in "functional-642000" cluster
	* Restarting existing qemu2 VM for "functional-642000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-642000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-642000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.195008542s for "functional-642000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (67.689166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
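Both restart attempts fail at the same step: libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client (see the Last Start log below), and that client cannot reach the /var/run/socket_vmnet socket, so the VM never boots and every subsequent test inherits a stopped host. A plausible triage on the CI host, assuming socket_vmnet was installed via Homebrew as the paths suggest (sketch only; the service name is an assumption):

	ls -l /var/run/socket_vmnet               # listening socket should exist while the daemon runs
	pgrep -fl socket_vmnet                    # is the daemon process alive?
	sudo brew services restart socket_vmnet   # assumption: brew-managed launchd service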

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-642000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-642000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.344916ms)

** stderr ** 
	error: context "functional-642000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-642000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.149375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 logs: exit status 83 (77.616542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:56 PDT |                     |
	|         | -p download-only-931000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| delete  | -p download-only-931000                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| start   | -o=json --download-only                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | -p download-only-879000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| delete  | -p download-only-879000                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| delete  | -p download-only-931000                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| delete  | -p download-only-879000                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| start   | --download-only -p                                                       | binary-mirror-067000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | binary-mirror-067000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51025                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-067000                                                  | binary-mirror-067000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| addons  | enable dashboard -p                                                      | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | addons-189000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | addons-189000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-189000 --wait=true                                             | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-189000                                                         | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| start   | -p nospam-636000 -n=1 --memory=2250 --wait=false                         | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-636000                                                         | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | minikube-local-cache-test:functional-642000                              |                      |         |         |                     |                     |
	| cache   | functional-642000 cache delete                                           | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | minikube-local-cache-test:functional-642000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	| ssh     | functional-642000 ssh sudo                                               | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-642000                                                        | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-642000 ssh                                                    | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-642000 cache reload                                           | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	| ssh     | functional-642000 ssh                                                    | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-642000 kubectl --                                             | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | --context functional-642000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 10:58:16
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 10:58:16.201933    9825 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:58:16.202045    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:58:16.202047    9825 out.go:304] Setting ErrFile to fd 2...
	I0507 10:58:16.202049    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:58:16.202171    9825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:58:16.203169    9825 out.go:298] Setting JSON to false
	I0507 10:58:16.218677    9825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5267,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:58:16.218745    9825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:58:16.225463    9825 out.go:177] * [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:58:16.233427    9825 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 10:58:16.241440    9825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:58:16.233481    9825 notify.go:220] Checking for updates...
	I0507 10:58:16.248446    9825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:58:16.252488    9825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:58:16.255438    9825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 10:58:16.258463    9825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 10:58:16.262716    9825 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:58:16.262791    9825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:58:16.267572    9825 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 10:58:16.274448    9825 start.go:297] selected driver: qemu2
	I0507 10:58:16.274452    9825 start.go:901] validating driver "qemu2" against &{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:58:16.274503    9825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 10:58:16.276884    9825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 10:58:16.276908    9825 cni.go:84] Creating CNI manager for ""
	I0507 10:58:16.276914    9825 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 10:58:16.276968    9825 start.go:340] cluster config:
	{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:58:16.281591    9825 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 10:58:16.287452    9825 out.go:177] * Starting "functional-642000" primary control-plane node in "functional-642000" cluster
	I0507 10:58:16.291293    9825 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 10:58:16.291305    9825 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 10:58:16.291310    9825 cache.go:56] Caching tarball of preloaded images
	I0507 10:58:16.291368    9825 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 10:58:16.291372    9825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 10:58:16.291434    9825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/functional-642000/config.json ...
	I0507 10:58:16.291840    9825 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 10:58:16.291890    9825 start.go:364] duration metric: took 45.833µs to acquireMachinesLock for "functional-642000"
	I0507 10:58:16.291899    9825 start.go:96] Skipping create...Using existing machine configuration
	I0507 10:58:16.291904    9825 fix.go:54] fixHost starting: 
	I0507 10:58:16.292038    9825 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
	W0507 10:58:16.292044    9825 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 10:58:16.299439    9825 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
	I0507 10:58:16.304523    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
	I0507 10:58:16.306725    9825 main.go:141] libmachine: STDOUT: 
	I0507 10:58:16.306742    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 10:58:16.306773    9825 fix.go:56] duration metric: took 14.87025ms for fixHost
	I0507 10:58:16.306775    9825 start.go:83] releasing machines lock for "functional-642000", held for 14.882583ms
	W0507 10:58:16.306783    9825 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 10:58:16.306820    9825 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:58:16.306826    9825 start.go:728] Will try again in 5 seconds ...
	I0507 10:58:21.308644    9825 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 10:58:21.309040    9825 start.go:364] duration metric: took 316.458µs to acquireMachinesLock for "functional-642000"
	I0507 10:58:21.309209    9825 start.go:96] Skipping create...Using existing machine configuration
	I0507 10:58:21.309226    9825 fix.go:54] fixHost starting: 
	I0507 10:58:21.309962    9825 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
	W0507 10:58:21.309984    9825 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 10:58:21.320678    9825 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
	I0507 10:58:21.324739    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
	I0507 10:58:21.334457    9825 main.go:141] libmachine: STDOUT: 
	I0507 10:58:21.334523    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 10:58:21.334615    9825 fix.go:56] duration metric: took 25.387125ms for fixHost
	I0507 10:58:21.334625    9825 start.go:83] releasing machines lock for "functional-642000", held for 25.570625ms
	W0507 10:58:21.334913    9825 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 10:58:21.342685    9825 out.go:177] 
	W0507 10:58:21.346647    9825 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 10:58:21.346671    9825 out.go:239] * 
	W0507 10:58:21.349092    9825 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 10:58:21.357627    9825 out.go:177] 
	
	
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-642000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:56 PDT |                     |
|         | -p download-only-931000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-931000                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | -o=json --download-only                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | -p download-only-879000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-879000                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-931000                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-879000                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | --download-only -p                                                       | binary-mirror-067000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | binary-mirror-067000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51025                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-067000                                                  | binary-mirror-067000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| addons  | enable dashboard -p                                                      | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | addons-189000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | addons-189000                                                            |                      |         |         |                     |                     |
| start   | -p addons-189000 --wait=true                                             | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-189000                                                         | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | -p nospam-636000 -n=1 --memory=2250 --wait=false                         | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-636000                                                         | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | minikube-local-cache-test:functional-642000                              |                      |         |         |                     |                     |
| cache   | functional-642000 cache delete                                           | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | minikube-local-cache-test:functional-642000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
| ssh     | functional-642000 ssh sudo                                               | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-642000                                                        | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-642000 ssh                                                    | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-642000 cache reload                                           | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
| ssh     | functional-642000 ssh                                                    | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-642000 kubectl --                                             | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | --context functional-642000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/07 10:58:16
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0507 10:58:16.201933    9825 out.go:291] Setting OutFile to fd 1 ...
I0507 10:58:16.202045    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:58:16.202047    9825 out.go:304] Setting ErrFile to fd 2...
I0507 10:58:16.202049    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:58:16.202171    9825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:58:16.203169    9825 out.go:298] Setting JSON to false
I0507 10:58:16.218677    9825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5267,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0507 10:58:16.218745    9825 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0507 10:58:16.225463    9825 out.go:177] * [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0507 10:58:16.233427    9825 out.go:177]   - MINIKUBE_LOCATION=18804
I0507 10:58:16.241440    9825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
I0507 10:58:16.233481    9825 notify.go:220] Checking for updates...
I0507 10:58:16.248446    9825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0507 10:58:16.252488    9825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0507 10:58:16.255438    9825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
I0507 10:58:16.258463    9825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0507 10:58:16.262716    9825 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:58:16.262791    9825 driver.go:392] Setting default libvirt URI to qemu:///system
I0507 10:58:16.267572    9825 out.go:177] * Using the qemu2 driver based on existing profile
I0507 10:58:16.274448    9825 start.go:297] selected driver: qemu2
I0507 10:58:16.274452    9825 start.go:901] validating driver "qemu2" against &{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0507 10:58:16.274503    9825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0507 10:58:16.276884    9825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0507 10:58:16.276908    9825 cni.go:84] Creating CNI manager for ""
I0507 10:58:16.276914    9825 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0507 10:58:16.276968    9825 start.go:340] cluster config:
{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0507 10:58:16.281591    9825 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0507 10:58:16.287452    9825 out.go:177] * Starting "functional-642000" primary control-plane node in "functional-642000" cluster
I0507 10:58:16.291293    9825 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0507 10:58:16.291305    9825 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0507 10:58:16.291310    9825 cache.go:56] Caching tarball of preloaded images
I0507 10:58:16.291368    9825 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0507 10:58:16.291372    9825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0507 10:58:16.291434    9825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/functional-642000/config.json ...
I0507 10:58:16.291840    9825 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0507 10:58:16.291890    9825 start.go:364] duration metric: took 45.833µs to acquireMachinesLock for "functional-642000"
I0507 10:58:16.291899    9825 start.go:96] Skipping create...Using existing machine configuration
I0507 10:58:16.291904    9825 fix.go:54] fixHost starting: 
I0507 10:58:16.292038    9825 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
W0507 10:58:16.292044    9825 fix.go:138] unexpected machine state, will restart: <nil>
I0507 10:58:16.299439    9825 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
I0507 10:58:16.304523    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
I0507 10:58:16.306725    9825 main.go:141] libmachine: STDOUT: 
I0507 10:58:16.306742    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0507 10:58:16.306773    9825 fix.go:56] duration metric: took 14.87025ms for fixHost
I0507 10:58:16.306775    9825 start.go:83] releasing machines lock for "functional-642000", held for 14.882583ms
W0507 10:58:16.306783    9825 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0507 10:58:16.306820    9825 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0507 10:58:16.306826    9825 start.go:728] Will try again in 5 seconds ...
I0507 10:58:21.308644    9825 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0507 10:58:21.309040    9825 start.go:364] duration metric: took 316.458µs to acquireMachinesLock for "functional-642000"
I0507 10:58:21.309209    9825 start.go:96] Skipping create...Using existing machine configuration
I0507 10:58:21.309226    9825 fix.go:54] fixHost starting: 
I0507 10:58:21.309962    9825 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
W0507 10:58:21.309984    9825 fix.go:138] unexpected machine state, will restart: <nil>
I0507 10:58:21.320678    9825 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
I0507 10:58:21.324739    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
I0507 10:58:21.334457    9825 main.go:141] libmachine: STDOUT: 
I0507 10:58:21.334523    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0507 10:58:21.334615    9825 fix.go:56] duration metric: took 25.387125ms for fixHost
I0507 10:58:21.334625    9825 start.go:83] releasing machines lock for "functional-642000", held for 25.570625ms
W0507 10:58:21.334913    9825 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0507 10:58:21.342685    9825 out.go:177] 
W0507 10:58:21.346647    9825 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0507 10:58:21.346671    9825 out.go:239] * 
W0507 10:58:21.349092    9825 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0507 10:58:21.357627    9825 out.go:177] 

* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
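Every start attempt above fails the same way: the qemu2 driver cannot reach the socket_vmnet helper ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal sketch for checking the helper on the build agent, assuming the install paths the log itself shows (the launchd service name is a guess and depends on how socket_vmnet was installed):

  # Is the unix socket present at the path minikube uses?
  ls -l /var/run/socket_vmnet

  # If socket_vmnet runs under launchd, confirm the service is loaded (label assumed):
  sudo launchctl list | grep -i socket_vmnet

  # socket_vmnet_client wraps a command; with the daemon down it fails with the
  # same "Connection refused" seen in the log above:
  /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true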

TestFunctional/serial/LogsFileCmd (0.07s)

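For orientation, the check driven below writes the cluster logs to a file and asserts they contain the word "Linux". A rough hand-run equivalent, with /tmp/logs.txt standing in for the per-test temp path, would be:

  # Write logs to a file, then look for the word the test expects:
  out/minikube-darwin-arm64 -p functional-642000 logs --file /tmp/logs.txt
  grep -c Linux /tmp/logs.txt   # finds nothing here, since the VM never started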
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3732381727/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:56 PDT |                     |
|         | -p download-only-931000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-931000                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | -o=json --download-only                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | -p download-only-879000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-879000                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-931000                                                  | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| delete  | -p download-only-879000                                                  | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | --download-only -p                                                       | binary-mirror-067000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | binary-mirror-067000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51025                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-067000                                                  | binary-mirror-067000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| addons  | enable dashboard -p                                                      | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | addons-189000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | addons-189000                                                            |                      |         |         |                     |                     |
| start   | -p addons-189000 --wait=true                                             | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-189000                                                         | addons-189000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | -p nospam-636000 -n=1 --memory=2250 --wait=false                         | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-636000 --log_dir                                                  | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-636000                                                         | nospam-636000        | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-642000 cache add                                              | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | minikube-local-cache-test:functional-642000                              |                      |         |         |                     |                     |
| cache   | functional-642000 cache delete                                           | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | minikube-local-cache-test:functional-642000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
| ssh     | functional-642000 ssh sudo                                               | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-642000                                                        | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-642000 ssh                                                    | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-642000 cache reload                                           | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
| ssh     | functional-642000 ssh                                                    | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 07 May 24 10:58 PDT | 07 May 24 10:58 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-642000 kubectl --                                             | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | --context functional-642000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-642000                                                     | functional-642000    | jenkins | v1.33.0 | 07 May 24 10:58 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

                                                
                                                

                                                
                                                
==> Last Start <==
Log file created at: 2024/05/07 10:58:16
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0507 10:58:16.201933    9825 out.go:291] Setting OutFile to fd 1 ...
I0507 10:58:16.202045    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:58:16.202047    9825 out.go:304] Setting ErrFile to fd 2...
I0507 10:58:16.202049    9825 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:58:16.202171    9825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:58:16.203169    9825 out.go:298] Setting JSON to false
I0507 10:58:16.218677    9825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5267,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0507 10:58:16.218745    9825 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0507 10:58:16.225463    9825 out.go:177] * [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0507 10:58:16.233427    9825 out.go:177]   - MINIKUBE_LOCATION=18804
I0507 10:58:16.241440    9825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
I0507 10:58:16.233481    9825 notify.go:220] Checking for updates...
I0507 10:58:16.248446    9825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0507 10:58:16.252488    9825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0507 10:58:16.255438    9825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
I0507 10:58:16.258463    9825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0507 10:58:16.262716    9825 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:58:16.262791    9825 driver.go:392] Setting default libvirt URI to qemu:///system
I0507 10:58:16.267572    9825 out.go:177] * Using the qemu2 driver based on existing profile
I0507 10:58:16.274448    9825 start.go:297] selected driver: qemu2
I0507 10:58:16.274452    9825 start.go:901] validating driver "qemu2" against &{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0507 10:58:16.274503    9825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0507 10:58:16.276884    9825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0507 10:58:16.276908    9825 cni.go:84] Creating CNI manager for ""
I0507 10:58:16.276914    9825 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0507 10:58:16.276968    9825 start.go:340] cluster config:
{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0507 10:58:16.281591    9825 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0507 10:58:16.287452    9825 out.go:177] * Starting "functional-642000" primary control-plane node in "functional-642000" cluster
I0507 10:58:16.291293    9825 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0507 10:58:16.291305    9825 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0507 10:58:16.291310    9825 cache.go:56] Caching tarball of preloaded images
I0507 10:58:16.291368    9825 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0507 10:58:16.291372    9825 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0507 10:58:16.291434    9825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/functional-642000/config.json ...
I0507 10:58:16.291840    9825 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0507 10:58:16.291890    9825 start.go:364] duration metric: took 45.833µs to acquireMachinesLock for "functional-642000"
I0507 10:58:16.291899    9825 start.go:96] Skipping create...Using existing machine configuration
I0507 10:58:16.291904    9825 fix.go:54] fixHost starting: 
I0507 10:58:16.292038    9825 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
W0507 10:58:16.292044    9825 fix.go:138] unexpected machine state, will restart: <nil>
I0507 10:58:16.299439    9825 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
I0507 10:58:16.304523    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
I0507 10:58:16.306725    9825 main.go:141] libmachine: STDOUT: 
I0507 10:58:16.306742    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0507 10:58:16.306773    9825 fix.go:56] duration metric: took 14.87025ms for fixHost
I0507 10:58:16.306775    9825 start.go:83] releasing machines lock for "functional-642000", held for 14.882583ms
W0507 10:58:16.306783    9825 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0507 10:58:16.306820    9825 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0507 10:58:16.306826    9825 start.go:728] Will try again in 5 seconds ...
I0507 10:58:21.308644    9825 start.go:360] acquireMachinesLock for functional-642000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0507 10:58:21.309040    9825 start.go:364] duration metric: took 316.458µs to acquireMachinesLock for "functional-642000"
I0507 10:58:21.309209    9825 start.go:96] Skipping create...Using existing machine configuration
I0507 10:58:21.309226    9825 fix.go:54] fixHost starting: 
I0507 10:58:21.309962    9825 fix.go:112] recreateIfNeeded on functional-642000: state=Stopped err=<nil>
W0507 10:58:21.309984    9825 fix.go:138] unexpected machine state, will restart: <nil>
I0507 10:58:21.320678    9825 out.go:177] * Restarting existing qemu2 VM for "functional-642000" ...
I0507 10:58:21.324739    9825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:8c:c1:0a:bb:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/functional-642000/disk.qcow2
I0507 10:58:21.334457    9825 main.go:141] libmachine: STDOUT: 
I0507 10:58:21.334523    9825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0507 10:58:21.334615    9825 fix.go:56] duration metric: took 25.387125ms for fixHost
I0507 10:58:21.334625    9825 start.go:83] releasing machines lock for "functional-642000", held for 25.570625ms
W0507 10:58:21.334913    9825 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-642000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0507 10:58:21.342685    9825 out.go:177] 
W0507 10:58:21.346647    9825 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0507 10:58:21.346671    9825 out.go:239] * 
W0507 10:58:21.349092    9825 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0507 10:58:21.357627    9825 out.go:177] 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
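
The root cause for this whole profile is visible above: the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so the VM never restarts and the remaining functional tests fail downstream of that. Below is a minimal diagnostic sketch for the build host; it assumes socket_vmnet was installed via Homebrew (the service name and socket path are taken from the log above, not verified by this run):

	# is anything listening on the socket the driver dials?
	ls -l /var/run/socket_vmnet
	# if installed via Homebrew, restart the daemon as root so it can recreate the socket
	sudo brew services restart socket_vmnet
	# then retry the failing profile
	out/minikube-darwin-arm64 start -p functional-642000
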
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-642000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-642000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.641625ms)
** stderr ** 
	error: context "functional-642000" does not exist
** /stderr **
functional_test.go:2319: kubectl --context functional-642000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
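
The kubectl failures here and in the tests below are secondary: because the VM never started, minikube never wrote a "functional-642000" context into the kubeconfig, so every "kubectl --context functional-642000 ..." invocation aborts before reaching any cluster. A quick confirmation sketch (not part of the test suite):

	# list the contexts kubectl actually knows about; functional-642000 should be absent
	kubectl config get-contexts
	kubectl config current-context
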
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-642000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-642000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-642000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-642000 --alsologtostderr -v=1] stderr:
I0507 10:59:08.367966   10198 out.go:291] Setting OutFile to fd 1 ...
I0507 10:59:08.368329   10198 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.368332   10198 out.go:304] Setting ErrFile to fd 2...
I0507 10:59:08.368335   10198 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.368501   10198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:59:08.368732   10198 mustload.go:65] Loading cluster: functional-642000
I0507 10:59:08.368929   10198 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.372036   10198 out.go:177] * The control-plane node functional-642000 host is not running: state=Stopped
I0507 10:59:08.375779   10198 out.go:177]   To start a cluster, run: "minikube start -p functional-642000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (42.157333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 status: exit status 7 (29.178ms)
-- stdout --
	functional-642000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-642000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (28.950084ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-642000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 status -o json: exit status 7 (29.056583ms)
-- stdout --
	{"Name":"functional-642000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-642000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.1775ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
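
The recurring "exit status 7" from the status command is a bitmask rather than an error count: minikube sets bit 0 when the host is not running, bit 1 when the cluster is not running, and bit 2 when Kubernetes is not running, so a fully stopped profile reports 1+2+4 = 7 (this follows the scheme described in minikube's status help; treat the bit names here as a paraphrase). A direct check with the same binary:

	out/minikube-darwin-arm64 -p functional-642000 status
	echo $?   # 7 here: host, cluster, and kubernetes are all down
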
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-642000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-642000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.930333ms)
** stderr ** 
	error: context "functional-642000" does not exist
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-642000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-642000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-642000 describe po hello-node-connect: exit status 1 (26.851542ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:1600: "kubectl --context functional-642000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-642000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-642000 logs -l app=hello-node-connect: exit status 1 (26.633ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:1606: "kubectl --context functional-642000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-642000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-642000 describe svc hello-node-connect: exit status 1 (26.0655ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:1612: "kubectl --context functional-642000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.6965ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-642000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.103375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "echo hello": exit status 83 (48.866667ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n"*. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "cat /etc/hostname": exit status 83 (46.987625ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-642000"- but got *"* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n"*. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.323166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
TestFunctional/parallel/CpCmd (0.28s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.125792ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-642000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.924542ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-642000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-642000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cp functional-642000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1593323958/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 cp functional-642000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1593323958/001/cp-test.txt: exit status 83 (44.711084ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-642000 cp functional-642000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1593323958/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.897417ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1593323958/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.705042ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-642000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (41.754125ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-642000 ssh -n functional-642000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-642000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-642000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/9422/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/test/nested/copy/9422/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/test/nested/copy/9422/hosts": exit status 83 (39.816708ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/test/nested/copy/9422/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-642000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-642000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.41525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.28s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/9422.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/9422.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/9422.pem": exit status 83 (43.431791ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/9422.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo cat /etc/ssl/certs/9422.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/9422.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-642000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-642000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/9422.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /usr/share/ca-certificates/9422.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /usr/share/ca-certificates/9422.pem": exit status 83 (38.613791ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/9422.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo cat /usr/share/ca-certificates/9422.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/9422.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-642000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-642000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (39.730125ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-642000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-642000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/94222.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/94222.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/94222.pem": exit status 83 (44.508791ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/94222.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo cat /etc/ssl/certs/94222.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/94222.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-642000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-642000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/94222.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /usr/share/ca-certificates/94222.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /usr/share/ca-certificates/94222.pem": exit status 83 (39.6225ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/94222.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo cat /usr/share/ca-certificates/94222.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/94222.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-642000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-642000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (38.6235ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-642000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-642000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.540458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
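
The expected names 51391683.0 and 3ec20f2e.0 are OpenSSL subject-hash links: CertSync checks that each test certificate is installed both under its upload name (9422.pem, 94222.pem) and under the hash-named form that CA-store tooling creates in /etc/ssl/certs. Assuming the testdata PEMs are at hand, the hash half of each name can be reproduced locally (a sketch, not run here):

	# prints the 8-hex-digit subject hash used for the /etc/ssl/certs/<hash>.0 link
	openssl x509 -noout -subject_hash -in minikube_test.pem    # pairs with 51391683.0 above
	openssl x509 -noout -subject_hash -in minikube_test2.pem   # pairs with 3ec20f2e.0 above
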
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-642000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-642000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.10975ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-642000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-642000 -n functional-642000: exit status 7 (29.199834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-642000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
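
Note: the assertion above reduces to querying node labels with a go-template and checking for the minikube.k8s.io/* keys. A minimal standalone sketch of that check in Go (illustrative only, not the test's actual code; assumes kubectl on PATH and a live functional-642000 context):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template the test passes to kubectl: print every label key
	// of the first node, space-separated.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-642000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		fmt.Println(err) // here: the "context was not found" failure above
		return
	}
	labels := string(out)
	for _, want := range []string{"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name", "minikube.k8s.io/primary"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(labels, want))
	}
}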

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo systemctl is-active crio": exit status 83 (36.50875ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
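
The check being made here is simple: with docker as the active runtime, systemd inside the VM should report crio as inactive. A hedged sketch of the probe (assumes a running profile; `systemctl is-active` exits non-zero when the unit is inactive, so the exit code is ignored and only the printed state is compared):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask systemd inside the minikube guest for crio's state. With docker as
	// the active runtime the expected output is "inactive".
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-642000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	fmt.Printf("crio: %q (want \"inactive\")\n", state)
}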

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 version -o=json --components: exit status 83 (40.89025ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-642000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-642000 image ls --format short --alsologtostderr:
I0507 10:59:08.768494   10213 out.go:291] Setting OutFile to fd 1 ...
I0507 10:59:08.768645   10213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.768648   10213 out.go:304] Setting ErrFile to fd 2...
I0507 10:59:08.768650   10213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.768781   10213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:59:08.769191   10213 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.769255   10213 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-642000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-642000 image ls --format table --alsologtostderr:
I0507 10:59:08.981798   10225 out.go:291] Setting OutFile to fd 1 ...
I0507 10:59:08.981950   10225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.981953   10225 out.go:304] Setting ErrFile to fd 2...
I0507 10:59:08.981955   10225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.982083   10225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:59:08.982522   10225 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.982582   10225 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-642000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-642000 image ls --format json --alsologtostderr:
I0507 10:59:08.946708   10223 out.go:291] Setting OutFile to fd 1 ...
I0507 10:59:08.946864   10223 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.946867   10223 out.go:304] Setting ErrFile to fd 2...
I0507 10:59:08.946869   10223 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.946991   10223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:59:08.947409   10223 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.947472   10223 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)
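
`image ls --format json` emits a JSON array (here the empty `[]` shown in the stdout above, since the stopped host has no images). A sketch of how such output can be decoded and scanned for registry.k8s.io/pause; entries are kept as raw JSON because the exact field names are not shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the command's stdout; the failing run produced "[]".
	raw := []byte(`[]`)

	var entries []json.RawMessage
	if err := json.Unmarshal(raw, &entries); err != nil {
		fmt.Println("not a JSON array:", err)
		return
	}
	found := false
	for _, e := range entries {
		if strings.Contains(string(e), "registry.k8s.io/pause") {
			found = true
			break
		}
	}
	fmt.Println("registry.k8s.io/pause listed:", found) // false for []
}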

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-642000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-642000 image ls --format yaml --alsologtostderr:
I0507 10:59:08.913151   10221 out.go:291] Setting OutFile to fd 1 ...
I0507 10:59:08.913300   10221 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.913303   10221 out.go:304] Setting ErrFile to fd 2...
I0507 10:59:08.913306   10221 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.913419   10221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:59:08.913816   10221 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.913879   10221 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh pgrep buildkitd: exit status 83 (41.673833ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image build -t localhost/my-image:functional-642000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-642000 image build -t localhost/my-image:functional-642000 testdata/build --alsologtostderr:
I0507 10:59:08.845909   10217 out.go:291] Setting OutFile to fd 1 ...
I0507 10:59:08.846337   10217 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.846341   10217 out.go:304] Setting ErrFile to fd 2...
I0507 10:59:08.846343   10217 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:59:08.846493   10217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:59:08.846891   10217 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.847358   10217 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:59:08.847594   10217 build_images.go:133] succeeded building to: 
I0507 10:59:08.847598   10217 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls
functional_test.go:442: expected "localhost/my-image:functional-642000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-642000 docker-env) && out/minikube-darwin-arm64 status -p functional-642000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-642000 docker-env) && out/minikube-darwin-arm64 status -p functional-642000": exit status 1 (43.683167ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2: exit status 83 (40.751791ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
** stderr ** 
	I0507 10:59:08.643405   10207 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:59:08.643844   10207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.643848   10207 out.go:304] Setting ErrFile to fd 2...
	I0507 10:59:08.643850   10207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.644051   10207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:59:08.644262   10207 mustload.go:65] Loading cluster: functional-642000
	I0507 10:59:08.644455   10207 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:59:08.648158   10207 out.go:177] * The control-plane node functional-642000 host is not running: state=Stopped
	I0507 10:59:08.652235   10207 out.go:177]   To start a cluster, run: "minikube start -p functional-642000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2: exit status 83 (41.583583ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
** stderr ** 
	I0507 10:59:08.684622   10209 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:59:08.684764   10209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.684767   10209 out.go:304] Setting ErrFile to fd 2...
	I0507 10:59:08.684770   10209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.684910   10209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:59:08.685127   10209 mustload.go:65] Loading cluster: functional-642000
	I0507 10:59:08.685305   10209 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:59:08.690241   10209 out.go:177] * The control-plane node functional-642000 host is not running: state=Stopped
	I0507 10:59:08.694223   10209 out.go:177]   To start a cluster, run: "minikube start -p functional-642000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2: exit status 83 (41.592584ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
** stderr ** 
	I0507 10:59:08.727041   10211 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:59:08.727189   10211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.727192   10211 out.go:304] Setting ErrFile to fd 2...
	I0507 10:59:08.727194   10211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.727337   10211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:59:08.727557   10211 mustload.go:65] Loading cluster: functional-642000
	I0507 10:59:08.727764   10211 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:59:08.732267   10211 out.go:177] * The control-plane node functional-642000 host is not running: state=Stopped
	I0507 10:59:08.736206   10211 out.go:177]   To start a cluster, run: "minikube start -p functional-642000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-642000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-642000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-642000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.333667ms)

** stderr ** 
	error: context "functional-642000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-642000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 service list: exit status 83 (42.904958ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-642000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 service list -o json: exit status 83 (45.506166ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-642000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 service --namespace=default --https --url hello-node: exit status 83 (46.048959ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-642000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 service hello-node --url --format={{.IP}}: exit status 83 (41.844ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-642000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 service hello-node --url: exit status 83 (40.894084ms)

-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-642000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test.go:1565: failed to parse "* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"": parse "* The control-plane node functional-642000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-642000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
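
The final parse error is mechanical: whatever `service ... --url` prints is handed to net/url, and the advisory text contains a newline, which url.Parse rejects as a control character. A small demonstration (the well-formed URL is a hypothetical example):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// A well-formed endpoint parses cleanly (this address is hypothetical).
	if u, err := url.Parse("http://192.168.105.4:30080"); err == nil {
		fmt.Println("host:", u.Host)
	}

	// The advisory text the test actually received, embedded newline and all.
	msg := "* The control-plane node functional-642000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-642000\""
	if _, err := url.Parse(msg); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}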

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0507 10:58:23.299676    9945 out.go:291] Setting OutFile to fd 1 ...
I0507 10:58:23.299849    9945 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:58:23.299852    9945 out.go:304] Setting ErrFile to fd 2...
I0507 10:58:23.299855    9945 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 10:58:23.299987    9945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 10:58:23.300246    9945 mustload.go:65] Loading cluster: functional-642000
I0507 10:58:23.300461    9945 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 10:58:23.304917    9945 out.go:177] * The control-plane node functional-642000 host is not running: state=Stopped
I0507 10:58:23.319484    9945 out.go:177]   To start a cluster, run: "minikube start -p functional-642000"

stdout: * The control-plane node functional-642000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-642000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 9946: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-642000": client config: context "functional-642000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-642000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-642000 get svc nginx-svc: exit status 1 (68.460709ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-642000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-642000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.06s)
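
The Get "http:" error means the tunnel never yielded a service IP, so the test built a URL with an empty host, which net/http refuses before any network I/O happens. Reproducible in isolation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}

	// An empty host is rejected client-side with "http: no Host in request URL".
	if _, err := client.Get("http://"); err != nil {
		fmt.Println(err)
	}

	// With a tunnel-assigned service IP (hypothetical) the same call would
	// return the nginx welcome page:
	//   resp, err := client.Get("http://10.101.23.45")
}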

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image load --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-642000 image load --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr: (1.317423875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-642000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image load --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-642000 image load --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr: (1.298901s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-642000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.2148005s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-642000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image load --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-642000 image load --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr: (1.15137925s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-642000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.44s)
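
For context, the round trip this test exercises is pull, tag, load, then list, all via subprocesses. A compressed sketch of that sequence (illustrative only, not the test's code; error handling elided):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output, ignoring errors.
func run(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return string(out)
}

func main() {
	img := "gcr.io/google-containers/addon-resizer"
	run("docker", "pull", img+":1.8.9")
	run("docker", "tag", img+":1.8.9", img+":functional-642000")
	run("out/minikube-darwin-arm64", "-p", "functional-642000",
		"image", "load", "--daemon", img+":functional-642000")

	ls := run("out/minikube-darwin-arm64", "-p", "functional-642000", "image", "ls")
	// With the VM stopped, the loaded image never appears in the listing.
	fmt.Println("image listed:", strings.Contains(ls, img))
}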

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image save gcr.io/google-containers/addon-resizer:functional-642000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-642000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035519541s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
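
The dig query can be mirrored with Go's resolver pinned to the cluster DNS service at 10.96.0.10, which the tunnel is supposed to make reachable from the host. A sketch that times out exactly as dig did while the tunnel is down:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every lookup to the cluster DNS service instead of the system resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed (tunnel not routing):", err)
		return
	}
	fmt.Println("nginx-svc resolves to:", addrs)
}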

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (34.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (34.19s)

TestMultiControlPlane/serial/StartCluster (9.77s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-492000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-492000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.702080125s)

-- stdout --
	* [ha-492000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-492000" primary control-plane node in "ha-492000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:00:50.237251   10338 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:00:50.237386   10338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:00:50.237390   10338 out.go:304] Setting ErrFile to fd 2...
	I0507 11:00:50.237392   10338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:00:50.237514   10338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:00:50.238591   10338 out.go:298] Setting JSON to false
	I0507 11:00:50.254491   10338 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5421,"bootTime":1715099429,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:00:50.254546   10338 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:00:50.258636   10338 out.go:177] * [ha-492000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:00:50.266509   10338 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:00:50.269466   10338 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:00:50.266566   10338 notify.go:220] Checking for updates...
	I0507 11:00:50.272490   10338 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:00:50.275486   10338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:00:50.276957   10338 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:00:50.280469   10338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:00:50.283654   10338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:00:50.287334   10338 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:00:50.294543   10338 start.go:297] selected driver: qemu2
	I0507 11:00:50.294550   10338 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:00:50.294556   10338 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:00:50.296652   10338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:00:50.299494   10338 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:00:50.302532   10338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:00:50.302550   10338 cni.go:84] Creating CNI manager for ""
	I0507 11:00:50.302554   10338 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0507 11:00:50.302558   10338 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 11:00:50.302598   10338 start.go:340] cluster config:
	{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:00:50.307158   10338 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:00:50.313479   10338 out.go:177] * Starting "ha-492000" primary control-plane node in "ha-492000" cluster
	I0507 11:00:50.317479   10338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:00:50.317497   10338 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:00:50.317502   10338 cache.go:56] Caching tarball of preloaded images
	I0507 11:00:50.317568   10338 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:00:50.317575   10338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:00:50.317821   10338 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/ha-492000/config.json ...
	I0507 11:00:50.317832   10338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/ha-492000/config.json: {Name:mkf5a4515b681bcf1f36b965411193b98761b82b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:00:50.318180   10338 start.go:360] acquireMachinesLock for ha-492000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:00:50.318215   10338 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "ha-492000"
	I0507 11:00:50.318226   10338 start.go:93] Provisioning new machine with config: &{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:00:50.318256   10338 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:00:50.325467   10338 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:00:50.342512   10338 start.go:159] libmachine.API.Create for "ha-492000" (driver="qemu2")
	I0507 11:00:50.342530   10338 client.go:168] LocalClient.Create starting
	I0507 11:00:50.342586   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:00:50.342613   10338 main.go:141] libmachine: Decoding PEM data...
	I0507 11:00:50.342623   10338 main.go:141] libmachine: Parsing certificate...
	I0507 11:00:50.342659   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:00:50.342682   10338 main.go:141] libmachine: Decoding PEM data...
	I0507 11:00:50.342690   10338 main.go:141] libmachine: Parsing certificate...
	I0507 11:00:50.343005   10338 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:00:50.464204   10338 main.go:141] libmachine: Creating SSH key...
	I0507 11:00:50.537019   10338 main.go:141] libmachine: Creating Disk image...
	I0507 11:00:50.537025   10338 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:00:50.537188   10338 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:00:50.549817   10338 main.go:141] libmachine: STDOUT: 
	I0507 11:00:50.549844   10338 main.go:141] libmachine: STDERR: 
	I0507 11:00:50.549890   10338 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2 +20000M
	I0507 11:00:50.560697   10338 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:00:50.560723   10338 main.go:141] libmachine: STDERR: 
	I0507 11:00:50.560735   10338 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:00:50.560739   10338 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:00:50.560777   10338 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:bd:65:9e:6a:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:00:50.562476   10338 main.go:141] libmachine: STDOUT: 
	I0507 11:00:50.562499   10338 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:00:50.562518   10338 client.go:171] duration metric: took 219.990459ms to LocalClient.Create
	I0507 11:00:52.564676   10338 start.go:128] duration metric: took 2.246463958s to createHost
	I0507 11:00:52.564773   10338 start.go:83] releasing machines lock for "ha-492000", held for 2.246626875s
	W0507 11:00:52.564826   10338 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:00:52.576335   10338 out.go:177] * Deleting "ha-492000" in qemu2 ...
	W0507 11:00:52.598842   10338 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:00:52.598880   10338 start.go:728] Will try again in 5 seconds ...
	I0507 11:00:57.600917   10338 start.go:360] acquireMachinesLock for ha-492000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:00:57.601435   10338 start.go:364] duration metric: took 368.625µs to acquireMachinesLock for "ha-492000"
	I0507 11:00:57.601623   10338 start.go:93] Provisioning new machine with config: &{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:00:57.601938   10338 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:00:57.610586   10338 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:00:57.659510   10338 start.go:159] libmachine.API.Create for "ha-492000" (driver="qemu2")
	I0507 11:00:57.659562   10338 client.go:168] LocalClient.Create starting
	I0507 11:00:57.659673   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:00:57.659742   10338 main.go:141] libmachine: Decoding PEM data...
	I0507 11:00:57.659763   10338 main.go:141] libmachine: Parsing certificate...
	I0507 11:00:57.659831   10338 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:00:57.659875   10338 main.go:141] libmachine: Decoding PEM data...
	I0507 11:00:57.659887   10338 main.go:141] libmachine: Parsing certificate...
	I0507 11:00:57.660485   10338 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:00:57.791155   10338 main.go:141] libmachine: Creating SSH key...
	I0507 11:00:57.844840   10338 main.go:141] libmachine: Creating Disk image...
	I0507 11:00:57.844846   10338 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:00:57.845007   10338 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:00:57.857446   10338 main.go:141] libmachine: STDOUT: 
	I0507 11:00:57.857473   10338 main.go:141] libmachine: STDERR: 
	I0507 11:00:57.857531   10338 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2 +20000M
	I0507 11:00:57.868340   10338 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:00:57.868363   10338 main.go:141] libmachine: STDERR: 
	I0507 11:00:57.868374   10338 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:00:57.868377   10338 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:00:57.868406   10338 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:28:40:95:0b:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:00:57.870028   10338 main.go:141] libmachine: STDOUT: 
	I0507 11:00:57.870052   10338 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:00:57.870066   10338 client.go:171] duration metric: took 210.502625ms to LocalClient.Create
	I0507 11:00:59.872226   10338 start.go:128] duration metric: took 2.270328042s to createHost
	I0507 11:00:59.872380   10338 start.go:83] releasing machines lock for "ha-492000", held for 2.270913834s
	W0507 11:00:59.872684   10338 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:00:59.882335   10338 out.go:177] 
	W0507 11:00:59.887246   10338 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:00:59.887283   10338 out.go:239] * 
	* 
	W0507 11:00:59.889889   10338 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:00:59.897353   10338 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-492000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (66.169208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.77s)
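
Every VM creation above fails at the same step: QEMU is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the host is never provisioned and every later subtest inherits a stopped cluster. A minimal Go sketch of the same reachability check (the socket path is taken from SocketVMnetPath in the config above; this is a diagnostic sketch, not part of the harness):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the socket_vmnet control socket the way a client would.
	// "connection refused" here means the daemon is not listening,
	// which is exactly the failure recorded in the log above.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}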

                                                
                                    
TestMultiControlPlane/serial/DeployApp (115.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (58.443875ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-492000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- rollout status deployment/busybox: exit status 1 (56.485583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.476708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.594583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.893916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.851417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.087834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.805708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.175667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.896ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.364ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.662666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.12925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.910583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.686791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.556458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.521708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (28.931792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (115.25s)
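
The 115-second duration of this subtest is almost entirely wait time: each ha_test.go:140 attempt fails within roughly 100 ms with the same kubeconfig error, which suggests the harness sleeps between polls before giving up. A minimal fixed-interval polling sketch in that spirit (the attempt count and interval are illustrative guesses, not the harness's actual values):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// poll retries f at a fixed interval until it succeeds or attempts run out.
	// With a fast-failing f, nearly all elapsed time is spent in time.Sleep.
	func poll(attempts int, interval time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		err := poll(11, 10*time.Second, func() error {
			return errors.New(`no server found for cluster "ha-492000"`)
		})
		fmt.Println(err)
	}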

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-492000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.834208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-492000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.022167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-492000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-492000 -v=7 --alsologtostderr: exit status 83 (39.8925ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-492000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0507 11:02:55.334619   10497 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:55.335179   10497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.335182   10497 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:55.335184   10497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.335354   10497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:55.335581   10497 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:55.335767   10497 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:55.340182   10497 out.go:177] * The control-plane node ha-492000 host is not running: state=Stopped
	I0507 11:02:55.343056   10497 out.go:177]   To start a cluster, run: "minikube start -p ha-492000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-492000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.00975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-492000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-492000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.807208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-492000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-492000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-492000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.166584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
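
This subtest reports two errors from one root cause: kubectl exits non-zero because the ha-492000 context does not exist, so it writes nothing to stdout, and the harness then feeds that empty output to a JSON decoder. Decoding empty input reproduces the "unexpected end of JSON input" message exactly; a minimal sketch:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// An empty byte slice is not a JSON document, so Unmarshal fails with
	// the same error the harness logs after kubectl produced no output.
	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}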

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-492000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-492000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.068958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
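
The assertion here parses `profile list --output json` and counts the entries in Config.Nodes; because provisioning failed, the saved profile still holds the single initial control-plane node rather than the four nodes the HA flow expects. A trimmed-down Go sketch of that check (the struct models only the fields the assertion needs, not minikube's full config type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// ProfileList is a minimal view of the `profile list --output json`
	// payload quoted above; fields not modeled here are simply ignored.
	type ProfileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					ControlPlane bool
					Worker       bool
				}
			}
		}
	}

	func main() {
		data := []byte(`{"valid":[{"Name":"ha-492000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl ProfileList
		if err := json.Unmarshal(data, &pl); err != nil {
			panic(err)
		}
		// Prints 1: the stored config never grew past the initial node.
		fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes))
	}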

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status --output json -v=7 --alsologtostderr: exit status 7 (29.268083ms)

                                                
                                                
-- stdout --
	{"Name":"ha-492000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0507 11:02:55.557307   10510 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:55.557458   10510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.557461   10510 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:55.557463   10510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.557593   10510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:55.557716   10510 out.go:298] Setting JSON to true
	I0507 11:02:55.557726   10510 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:55.557792   10510 notify.go:220] Checking for updates...
	I0507 11:02:55.557907   10510 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:55.557917   10510 status.go:255] checking status of ha-492000 ...
	I0507 11:02:55.558128   10510 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:02:55.558132   10510 status.go:343] host is not running, skipping remaining checks
	I0507 11:02:55.558134   10510 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-492000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (28.973584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
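
The decode failure at ha_test.go:333 is a shape mismatch: with a single node, `status --output json` emits one JSON object, while the test decodes into a slice ([]cmd.Status). A minimal reproduction with a stand-in Status type (the real type lives in minikube's cmd package):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in for minikube's cmd.Status; only the shape matters.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		out := []byte(`{"Name":"ha-492000","Host":"Stopped"}`)
		var statuses []Status
		err := json.Unmarshal(out, &statuses)
		// json: cannot unmarshal object into Go value of type []main.Status
		fmt.Println(err)
	}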

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.606958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0507 11:02:55.615881   10514 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:55.616442   10514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.616446   10514 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:55.616449   10514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.616614   10514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:55.616860   10514 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:55.617065   10514 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:55.621441   10514 out.go:177] 
	W0507 11:02:55.624443   10514 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0507 11:02:55.624448   10514 out.go:239] * 
	* 
	W0507 11:02:55.626415   10514 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:02:55.631367   10514 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-492000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (28.88775ms)

                                                
                                                
-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0507 11:02:55.663570   10516 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:55.663727   10516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.663730   10516 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:55.663732   10516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.663857   10516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:55.663968   10516 out.go:298] Setting JSON to false
	I0507 11:02:55.663978   10516 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:55.664040   10516 notify.go:220] Checking for updates...
	I0507 11:02:55.664182   10516 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:55.664188   10516 status.go:255] checking status of ha-492000 ...
	I0507 11:02:55.664397   10516 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:02:55.664401   10516 status.go:343] host is not running, skipping remaining checks
	I0507 11:02:55.664403   10516 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-492000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (28.822917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
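Note: the profile JSON quoted above shows the assertion failing twice over: "Status" is "Stopped" rather than the expected "Degraded", and the "Nodes" array holds a single control-plane entry, so the secondary node this subtest is meant to degrade was never created. A quick way to confirm the node count from the same output (a sketch only, assuming jq is available on the host):

	out/minikube-darwin-arm64 profile list --output json | jq '.valid[0].Config.Nodes | length'
	# prints 1 here; per the later HAppy check, a healthy run expects 4 nodes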

TestMultiControlPlane/serial/RestartSecondaryNode (39.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.54975ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0507 11:02:55.819401   10526 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:55.819930   10526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.819934   10526 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:55.819936   10526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.820091   10526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:55.820308   10526 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:55.820478   10526 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:55.824070   10526 out.go:177] 
	W0507 11:02:55.828029   10526 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0507 11:02:55.828034   10526 out.go:239] * 
	* 
	W0507 11:02:55.829898   10526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:02:55.833956   10526 out.go:177] 

** /stderr **
ha_test.go:422: I0507 11:02:55.819401   10526 out.go:291] Setting OutFile to fd 1 ...
I0507 11:02:55.819930   10526 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 11:02:55.819934   10526 out.go:304] Setting ErrFile to fd 2...
I0507 11:02:55.819936   10526 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 11:02:55.820091   10526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 11:02:55.820308   10526 mustload.go:65] Loading cluster: ha-492000
I0507 11:02:55.820478   10526 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 11:02:55.824070   10526 out.go:177] 
W0507 11:02:55.828029   10526 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0507 11:02:55.828034   10526 out.go:239] * 
* 
W0507 11:02:55.829898   10526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0507 11:02:55.833956   10526 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-492000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (29.1185ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:02:55.866410   10528 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:55.866546   10528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.866549   10528 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:55.866552   10528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:55.866677   10528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:55.866794   10528 out.go:298] Setting JSON to false
	I0507 11:02:55.866809   10528 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:55.866866   10528 notify.go:220] Checking for updates...
	I0507 11:02:55.867009   10528 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:55.867014   10528 status.go:255] checking status of ha-492000 ...
	I0507 11:02:55.867231   10528 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:02:55.867234   10528 status.go:343] host is not running, skipping remaining checks
	I0507 11:02:55.867236   10528 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (74.621ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:02:57.368844   10530 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:57.369028   10530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:57.369032   10530 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:57.369035   10530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:57.369219   10530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:57.369361   10530 out.go:298] Setting JSON to false
	I0507 11:02:57.369374   10530 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:57.369415   10530 notify.go:220] Checking for updates...
	I0507 11:02:57.369618   10530 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:57.369625   10530 status.go:255] checking status of ha-492000 ...
	I0507 11:02:57.369899   10530 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:02:57.369904   10530 status.go:343] host is not running, skipping remaining checks
	I0507 11:02:57.369907   10530 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (70.489209ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:02:58.557618   10534 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:02:58.557807   10534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:58.557812   10534 out.go:304] Setting ErrFile to fd 2...
	I0507 11:02:58.557815   10534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:02:58.557987   10534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:02:58.558154   10534 out.go:298] Setting JSON to false
	I0507 11:02:58.558168   10534 mustload.go:65] Loading cluster: ha-492000
	I0507 11:02:58.558216   10534 notify.go:220] Checking for updates...
	I0507 11:02:58.558454   10534 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:02:58.558461   10534 status.go:255] checking status of ha-492000 ...
	I0507 11:02:58.558791   10534 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:02:58.558796   10534 status.go:343] host is not running, skipping remaining checks
	I0507 11:02:58.558800   10534 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (72.786791ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:00.076792   10538 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:00.077052   10538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:00.077057   10538 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:00.077060   10538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:00.077258   10538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:00.077461   10538 out.go:298] Setting JSON to false
	I0507 11:03:00.077478   10538 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:00.077521   10538 notify.go:220] Checking for updates...
	I0507 11:03:00.077827   10538 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:00.077838   10538 status.go:255] checking status of ha-492000 ...
	I0507 11:03:00.078170   10538 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:00.078176   10538 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:00.078179   10538 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (71.743125ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:03.308914   10546 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:03.309102   10546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:03.309107   10546 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:03.309109   10546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:03.309291   10546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:03.309462   10546 out.go:298] Setting JSON to false
	I0507 11:03:03.309475   10546 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:03.309518   10546 notify.go:220] Checking for updates...
	I0507 11:03:03.309716   10546 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:03.309723   10546 status.go:255] checking status of ha-492000 ...
	I0507 11:03:03.310006   10546 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:03.310011   10546 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:03.310014   10546 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (71.257333ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:09.046718   10552 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:09.046915   10552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:09.046920   10552 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:09.046923   10552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:09.047080   10552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:09.047222   10552 out.go:298] Setting JSON to false
	I0507 11:03:09.047236   10552 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:09.047284   10552 notify.go:220] Checking for updates...
	I0507 11:03:09.047485   10552 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:09.047491   10552 status.go:255] checking status of ha-492000 ...
	I0507 11:03:09.047762   10552 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:09.047767   10552 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:09.047770   10552 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (72.9805ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:13.698645   10556 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:13.698837   10556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:13.698842   10556 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:13.698845   10556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:13.699016   10556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:13.699187   10556 out.go:298] Setting JSON to false
	I0507 11:03:13.699200   10556 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:13.699242   10556 notify.go:220] Checking for updates...
	I0507 11:03:13.699470   10556 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:13.699479   10556 status.go:255] checking status of ha-492000 ...
	I0507 11:03:13.699753   10556 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:13.699758   10556 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:13.699761   10556 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (74.9765ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:20.774748   10560 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:20.774962   10560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:20.774966   10560 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:20.774969   10560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:20.775137   10560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:20.775305   10560 out.go:298] Setting JSON to false
	I0507 11:03:20.775319   10560 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:20.775362   10560 notify.go:220] Checking for updates...
	I0507 11:03:20.775570   10560 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:20.775578   10560 status.go:255] checking status of ha-492000 ...
	I0507 11:03:20.775844   10560 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:20.775849   10560 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:20.775851   10560 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (72.222916ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:35.706555   10568 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:35.706723   10568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:35.706731   10568 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:35.706734   10568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:35.706908   10568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:35.707060   10568 out.go:298] Setting JSON to false
	I0507 11:03:35.707073   10568 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:35.707112   10568 notify.go:220] Checking for updates...
	I0507 11:03:35.707330   10568 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:35.707349   10568 status.go:255] checking status of ha-492000 ...
	I0507 11:03:35.707622   10568 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:35.707627   10568 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:35.707630   10568 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (32.715667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (39.95s)
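Note: the ~40s wall time of this subtest is almost entirely the retry loop at ha_test.go:428, which re-ran the status command nine times between 11:02:55 and 11:03:35 and got the identical single-node "Stopped" output on every attempt. A rough manual equivalent of that loop (a sketch only; the test's own retry backoff differs):

	until out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr; do
	  sleep 5   # status keeps exiting 7 while the host is stopped, so this never succeeds here
	done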

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-492000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-492000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.382292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.72s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-492000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-492000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-492000 -v=7 --alsologtostderr: (3.353394875s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-492000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-492000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.230964583s)

-- stdout --
	* [ha-492000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-492000" primary control-plane node in "ha-492000" cluster
	* Restarting existing qemu2 VM for "ha-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:03:39.287563   10598 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:39.287739   10598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:39.287744   10598 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:39.287746   10598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:39.287912   10598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:39.290148   10598 out.go:298] Setting JSON to false
	I0507 11:03:39.309306   10598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5590,"bootTime":1715099429,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:03:39.309367   10598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:03:39.314272   10598 out.go:177] * [ha-492000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:03:39.325094   10598 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:03:39.321236   10598 notify.go:220] Checking for updates...
	I0507 11:03:39.333091   10598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:03:39.337162   10598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:03:39.340046   10598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:03:39.343087   10598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:03:39.347144   10598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:03:39.351357   10598 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:39.351443   10598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:03:39.356094   10598 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:03:39.363045   10598 start.go:297] selected driver: qemu2
	I0507 11:03:39.363052   10598 start.go:901] validating driver "qemu2" against &{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:03:39.363128   10598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:03:39.365641   10598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:03:39.365696   10598 cni.go:84] Creating CNI manager for ""
	I0507 11:03:39.365700   10598 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 11:03:39.365743   10598 start.go:340] cluster config:
	{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:03:39.370114   10598 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:03:39.378932   10598 out.go:177] * Starting "ha-492000" primary control-plane node in "ha-492000" cluster
	I0507 11:03:39.383036   10598 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:03:39.383053   10598 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:03:39.383063   10598 cache.go:56] Caching tarball of preloaded images
	I0507 11:03:39.383116   10598 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:03:39.383121   10598 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:03:39.383176   10598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/ha-492000/config.json ...
	I0507 11:03:39.383617   10598 start.go:360] acquireMachinesLock for ha-492000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:03:39.383656   10598 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "ha-492000"
	I0507 11:03:39.383669   10598 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:03:39.383675   10598 fix.go:54] fixHost starting: 
	I0507 11:03:39.383802   10598 fix.go:112] recreateIfNeeded on ha-492000: state=Stopped err=<nil>
	W0507 11:03:39.383811   10598 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:03:39.385493   10598 out.go:177] * Restarting existing qemu2 VM for "ha-492000" ...
	I0507 11:03:39.393114   10598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:28:40:95:0b:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:03:39.395294   10598 main.go:141] libmachine: STDOUT: 
	I0507 11:03:39.395321   10598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:03:39.395350   10598 fix.go:56] duration metric: took 11.67425ms for fixHost
	I0507 11:03:39.395355   10598 start.go:83] releasing machines lock for "ha-492000", held for 11.691833ms
	W0507 11:03:39.395361   10598 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:03:39.395408   10598 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:03:39.395412   10598 start.go:728] Will try again in 5 seconds ...
	I0507 11:03:44.397160   10598 start.go:360] acquireMachinesLock for ha-492000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:03:44.397688   10598 start.go:364] duration metric: took 390.5µs to acquireMachinesLock for "ha-492000"
	I0507 11:03:44.397818   10598 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:03:44.397839   10598 fix.go:54] fixHost starting: 
	I0507 11:03:44.398605   10598 fix.go:112] recreateIfNeeded on ha-492000: state=Stopped err=<nil>
	W0507 11:03:44.398631   10598 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:03:44.406124   10598 out.go:177] * Restarting existing qemu2 VM for "ha-492000" ...
	I0507 11:03:44.410267   10598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:28:40:95:0b:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:03:44.420296   10598 main.go:141] libmachine: STDOUT: 
	I0507 11:03:44.420354   10598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:03:44.420437   10598 fix.go:56] duration metric: took 22.601458ms for fixHost
	I0507 11:03:44.420455   10598 start.go:83] releasing machines lock for "ha-492000", held for 22.744875ms
	W0507 11:03:44.420595   10598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:03:44.428035   10598 out.go:177] 
	W0507 11:03:44.435779   10598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:03:44.435827   10598 out.go:239] * 
	* 
	W0507 11:03:44.438498   10598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:03:44.444079   10598 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-492000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-492000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (32.199959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.72s)
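
Every failure in this stretch of the run traces back to one symptom: the qemu2 driver cannot connect to the unix socket at /var/run/socket_vmnet, so each VM start is refused before provisioning even begins. A minimal Go sketch (illustrative only, not part of the minikube test suite) that probes the same socket the driver dials:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the profile config logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "Connection refused" mirrors the driver-start failures: the socket
		// file may exist, but no socket_vmnet daemon is serving it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this dial succeeds on the build host, the starts above would likely have gotten past the driver; a refused connection points at the socket_vmnet service (or its permissions) rather than at minikube itself.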

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.025917ms)

-- stdout --
	* The control-plane node ha-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-492000"

-- /stdout --
** stderr ** 
	I0507 11:03:44.588486   10616 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:44.588884   10616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:44.588887   10616 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:44.588890   10616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:44.589045   10616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:44.589282   10616 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:44.589470   10616 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:44.593395   10616 out.go:177] * The control-plane node ha-492000 host is not running: state=Stopped
	I0507 11:03:44.596404   10616 out.go:177]   To start a cluster, run: "minikube start -p ha-492000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-492000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (30.54475ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:44.628977   10618 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:44.629153   10618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:44.629161   10618 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:44.629163   10618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:44.629306   10618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:44.629440   10618 out.go:298] Setting JSON to false
	I0507 11:03:44.629454   10618 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:44.629491   10618 notify.go:220] Checking for updates...
	I0507 11:03:44.629671   10618 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:44.629676   10618 status.go:255] checking status of ha-492000 ...
	I0507 11:03:44.629901   10618 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:44.629904   10618 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:44.629907   10618 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (30.839208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-492000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.77525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
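
The assertion at ha_test.go:413 reads the JSON from "minikube profile list --output json", whose shape is visible in the failure message above: top-level "invalid" and "valid" arrays, each valid profile carrying "Name" and "Status". A small Go sketch that extracts just those two fields (the struct names are ours; only the keys the assertion actually checks are mapped):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList maps only the fields of interest; json.Unmarshal ignores the
// rest of the large config blob seen in the log.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		// The test wants "Degraded" here; with the VM never started,
		// minikube reports "Stopped" instead.
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}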

TestMultiControlPlane/serial/StopCluster (3.34s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-492000 stop -v=7 --alsologtostderr: (3.239486292s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr: exit status 7 (69.231417ms)

-- stdout --
	ha-492000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:03:48.067624   10650 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:48.067852   10650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:48.067857   10650 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:48.067861   10650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:48.068062   10650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:48.068256   10650 out.go:298] Setting JSON to false
	I0507 11:03:48.068273   10650 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:48.068311   10650 notify.go:220] Checking for updates...
	I0507 11:03:48.068555   10650 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:48.068563   10650 status.go:255] checking status of ha-492000 ...
	I0507 11:03:48.068856   10650 status.go:330] ha-492000 host status = "Stopped" (err=<nil>)
	I0507 11:03:48.068861   10650 status.go:343] host is not running, skipping remaining checks
	I0507 11:03:48.068864   10650 status.go:257] ha-492000 status: &{Name:ha-492000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-492000 status -v=7 --alsologtostderr": ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (32.338541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.34s)
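
The three assertions above (ha_test.go:543, :549, :552) evidently judge the stop by counting marker lines in the plain-text status output; with only the primary profile ever created, each count is one instead of the two or three a full HA topology would yield. A toy Go version of that counting, fed the status text captured above (the counting itself is our reconstruction, not the test's exact code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as captured in the -- stdout -- block above.
	status := `ha-492000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped`

	// A healthy run would show two control planes, three stopped kubelets
	// and two stopped apiservers after the stop; here every count is 1.
	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
	fmt.Println("kubelets stopped:", strings.Count(status, "kubelet: Stopped"))
	fmt.Println("apiservers stopped:", strings.Count(status, "apiserver: Stopped"))
}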

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-492000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-492000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181748542s)

-- stdout --
	* [ha-492000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-492000" primary control-plane node in "ha-492000" cluster
	* Restarting existing qemu2 VM for "ha-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:03:48.129099   10654 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:48.129219   10654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:48.129222   10654 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:48.129225   10654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:48.129363   10654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:48.130362   10654 out.go:298] Setting JSON to false
	I0507 11:03:48.146264   10654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5599,"bootTime":1715099429,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:03:48.146330   10654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:03:48.149850   10654 out.go:177] * [ha-492000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:03:48.157884   10654 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:03:48.157936   10654 notify.go:220] Checking for updates...
	I0507 11:03:48.163775   10654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:03:48.166874   10654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:03:48.169790   10654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:03:48.172830   10654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:03:48.175825   10654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:03:48.179161   10654 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:48.179445   10654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:03:48.183831   10654 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:03:48.188773   10654 start.go:297] selected driver: qemu2
	I0507 11:03:48.188785   10654 start.go:901] validating driver "qemu2" against &{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:03:48.188829   10654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:03:48.191025   10654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:03:48.191051   10654 cni.go:84] Creating CNI manager for ""
	I0507 11:03:48.191055   10654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 11:03:48.191093   10654 start.go:340] cluster config:
	{Name:ha-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-492000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:03:48.195265   10654 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:03:48.202765   10654 out.go:177] * Starting "ha-492000" primary control-plane node in "ha-492000" cluster
	I0507 11:03:48.206839   10654 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:03:48.206856   10654 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:03:48.206866   10654 cache.go:56] Caching tarball of preloaded images
	I0507 11:03:48.206928   10654 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:03:48.206935   10654 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:03:48.207003   10654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/ha-492000/config.json ...
	I0507 11:03:48.207439   10654 start.go:360] acquireMachinesLock for ha-492000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:03:48.207481   10654 start.go:364] duration metric: took 35.542µs to acquireMachinesLock for "ha-492000"
	I0507 11:03:48.207491   10654 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:03:48.207497   10654 fix.go:54] fixHost starting: 
	I0507 11:03:48.207615   10654 fix.go:112] recreateIfNeeded on ha-492000: state=Stopped err=<nil>
	W0507 11:03:48.207624   10654 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:03:48.215841   10654 out.go:177] * Restarting existing qemu2 VM for "ha-492000" ...
	I0507 11:03:48.219873   10654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:28:40:95:0b:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:03:48.221927   10654 main.go:141] libmachine: STDOUT: 
	I0507 11:03:48.221948   10654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:03:48.221980   10654 fix.go:56] duration metric: took 14.483458ms for fixHost
	I0507 11:03:48.221985   10654 start.go:83] releasing machines lock for "ha-492000", held for 14.499417ms
	W0507 11:03:48.221992   10654 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:03:48.222032   10654 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:03:48.222037   10654 start.go:728] Will try again in 5 seconds ...
	I0507 11:03:53.224076   10654 start.go:360] acquireMachinesLock for ha-492000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:03:53.224529   10654 start.go:364] duration metric: took 343.667µs to acquireMachinesLock for "ha-492000"
	I0507 11:03:53.224689   10654 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:03:53.224710   10654 fix.go:54] fixHost starting: 
	I0507 11:03:53.225534   10654 fix.go:112] recreateIfNeeded on ha-492000: state=Stopped err=<nil>
	W0507 11:03:53.225561   10654 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:03:53.229937   10654 out.go:177] * Restarting existing qemu2 VM for "ha-492000" ...
	I0507 11:03:53.238293   10654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:28:40:95:0b:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/ha-492000/disk.qcow2
	I0507 11:03:53.247854   10654 main.go:141] libmachine: STDOUT: 
	I0507 11:03:53.247920   10654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:03:53.248029   10654 fix.go:56] duration metric: took 23.322583ms for fixHost
	I0507 11:03:53.248045   10654 start.go:83] releasing machines lock for "ha-492000", held for 23.492334ms
	W0507 11:03:53.248203   10654 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:03:53.256069   10654 out.go:177] 
	W0507 11:03:53.259994   10654 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:03:53.260026   10654 out.go:239] * 
	W0507 11:03:53.262471   10654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:03:53.271032   10654 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-492000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (67.8665ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
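
The restart path above makes exactly two attempts: fixHost fails on the refused socket, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION and status 80. A compact Go sketch of that control flow (startHost is a hypothetical stand-in for the driver start seen at start.go:713-728 in the log):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the qemu2 driver start that keeps failing above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second)
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the exit status the test reports
	}
}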

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-492000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (29.304042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-492000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-492000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.064791ms)

-- stdout --
	* The control-plane node ha-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-492000"

-- /stdout --
** stderr ** 
	I0507 11:03:53.482312   10674 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:03:53.482472   10674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:53.482475   10674 out.go:304] Setting ErrFile to fd 2...
	I0507 11:03:53.482477   10674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:03:53.482600   10674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:03:53.482824   10674 mustload.go:65] Loading cluster: ha-492000
	I0507 11:03:53.483011   10674 config.go:182] Loaded profile config "ha-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:03:53.487568   10674 out.go:177] * The control-plane node ha-492000 host is not running: state=Stopped
	I0507 11:03:53.491592   10674 out.go:177]   To start a cluster, run: "minikube start -p ha-492000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-492000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (28.806ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-492000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-492000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-492000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-492000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-492000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-492000 -n ha-492000: exit status 7 (28.950125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (9.8s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-297000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-297000 --driver=qemu2 : exit status 80 (9.735333208s)

-- stdout --
	* [image-297000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-297000" primary control-plane node in "image-297000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-297000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-297000 -n image-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-297000 -n image-297000: exit status 7 (68.14575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.80s)

TestJSONOutput/start/Command (9.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-845000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-845000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.852284292s)

-- stdout --
	{"specversion":"1.0","id":"cb913db8-1032-4f8d-9389-cfbab9434a7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-845000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"918648f4-9af4-4c9c-bcdb-fcb627aab211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18804"}}
	{"specversion":"1.0","id":"63b199e4-430c-40c4-bde7-571d1e8d613d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig"}}
	{"specversion":"1.0","id":"331fefa8-3624-44b1-a208-4771b01d513b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"de406ac2-3fee-4166-bf74-821fea493cbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b695535-906a-41b8-8a59-d40f0448f834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube"}}
	{"specversion":"1.0","id":"99e8da44-fe41-45a7-8d88-e9901a71ebca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f27b8ec8-b39f-4c42-b2a4-eb11663e5903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"033827c6-2dc6-448f-852b-2684fbbfc751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"daa01853-eaa3-4e20-8630-38819a213569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-845000\" primary control-plane node in \"json-output-845000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"10f5b115-d0f2-480e-89af-073ba3e5ac52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f3eaee55-235e-4587-a6fe-1157a2e4000c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-845000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"535f6b1c-1ad8-4125-9eef-54117bfaebb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"85ef796d-6b80-4a29-8dd5-12ec12eb143a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"927dd557-05aa-4449-8f6c-64258441d2dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-845000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"81189d4e-fe54-4192-b7f3-e51f309677b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d6af5cc5-6a93-4b31-beec-e7e93e1adfbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-845000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.85s)
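The cloudevents conversion error above follows directly from the socket_vmnet failure: the helper's plain-text "OUTPUT:"/"ERROR:" lines land on the same stdout as minikube's JSON events, so a line-by-line decode fails on the first non-JSON byte. A minimal Go sketch (illustrative only, not the harness's actual code) that reproduces the exact error:

package main

import (
	"encoding/json"
	"fmt"
)

// Each line of `minikube start --output=json` should be one JSON cloudevent,
// but socket_vmnet_client's plain-text "OUTPUT:" line is interleaved with them.
func main() {
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
		`OUTPUT: `, // plain text from the qemu helper, not JSON
	}
	for _, l := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			continue
		}
		fmt.Println("ok:", ev["type"])
	}
}

The unpause failure below is the same mechanism with a different first byte: human-readable "* ..." lines instead of JSON give "invalid character '*'".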

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-845000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-845000 --output=json --user=testUser: exit status 83 (83.322125ms)
-- stdout --
	{"specversion":"1.0","id":"9aa398b4-8c49-44c6-9db4-879843e0e597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-845000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"48e0e17d-6cf3-44de-a46a-9eca95a97a6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-845000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-845000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-845000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-845000 --output=json --user=testUser: exit status 83 (46.189167ms)

-- stdout --
	* The control-plane node json-output-845000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-845000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-845000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-845000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-939000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-939000 --driver=qemu2 : exit status 80 (9.77131675s)
-- stdout --
	* [first-939000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-939000" primary control-plane node in "first-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-939000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-07 11:04:26.886676 -0700 PDT m=+447.131762418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-940000 -n second-940000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-940000 -n second-940000: exit status 85 (78.523167ms)

-- stdout --
	* Profile "second-940000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-940000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-940000" host is not running, skipping log retrieval (state="* Profile \"second-940000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-940000\"")
helpers_test.go:175: Cleaning up "second-940000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-940000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-07 11:04:27.190281 -0700 PDT m=+447.435378376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-939000 -n first-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-939000 -n first-939000: exit status 7 (28.795542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-939000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-939000
--- FAIL: TestMinikubeProfile (10.20s)
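Every qemu2 start in this report dies the same way: socket_vmnet_client cannot reach the /var/run/socket_vmnet Unix socket. "Connection refused" (rather than "no such file or directory") suggests the socket file exists but no socket_vmnet daemon is accepting on it. A minimal sketch of that check, assuming nothing about the CI host beyond the socket path shown in the logs:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dialing a Unix socket with no listener behind it returns ECONNREFUSED,
	// the same "Connection refused" socket_vmnet_client reports above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("ERROR:", err) // dial unix /var/run/socket_vmnet: connect: connection refused
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening")
}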

TestMountStart/serial/StartWithMountFirst (9.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-205000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-205000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.800091375s)
-- stdout --
	* [mount-start-1-205000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-205000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-205000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-205000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-205000 -n mount-start-1-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-205000 -n mount-start-1-205000: exit status 7 (66.37075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.87s)

TestMultiNode/serial/FreshStart2Nodes (9.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.778530875s)
-- stdout --
	* [multinode-334000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-334000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0507 11:04:37.534492   10864 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:04:37.534625   10864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:04:37.534628   10864 out.go:304] Setting ErrFile to fd 2...
	I0507 11:04:37.534630   10864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:04:37.534753   10864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:04:37.535774   10864 out.go:298] Setting JSON to false
	I0507 11:04:37.551835   10864 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5648,"bootTime":1715099429,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:04:37.551894   10864 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:04:37.557494   10864 out.go:177] * [multinode-334000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:04:37.564405   10864 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:04:37.564455   10864 notify.go:220] Checking for updates...
	I0507 11:04:37.571351   10864 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:04:37.574416   10864 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:04:37.577441   10864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:04:37.580386   10864 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:04:37.583398   10864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:04:37.586548   10864 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:04:37.590316   10864 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:04:37.597405   10864 start.go:297] selected driver: qemu2
	I0507 11:04:37.597413   10864 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:04:37.597420   10864 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:04:37.599601   10864 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:04:37.602379   10864 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:04:37.605478   10864 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:04:37.605500   10864 cni.go:84] Creating CNI manager for ""
	I0507 11:04:37.605506   10864 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0507 11:04:37.605513   10864 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 11:04:37.605542   10864 start.go:340] cluster config:
	{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:04:37.609955   10864 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:04:37.617329   10864 out.go:177] * Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	I0507 11:04:37.621369   10864 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:04:37.621393   10864 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:04:37.621403   10864 cache.go:56] Caching tarball of preloaded images
	I0507 11:04:37.621463   10864 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:04:37.621468   10864 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:04:37.621651   10864 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/multinode-334000/config.json ...
	I0507 11:04:37.621663   10864 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/multinode-334000/config.json: {Name:mkf96bcd51aae3efe13cdef40b41f369fad4f8a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:04:37.621991   10864 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:04:37.622025   10864 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "multinode-334000"
	I0507 11:04:37.622037   10864 start.go:93] Provisioning new machine with config: &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:04:37.622069   10864 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:04:37.630379   10864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:04:37.647922   10864 start.go:159] libmachine.API.Create for "multinode-334000" (driver="qemu2")
	I0507 11:04:37.647950   10864 client.go:168] LocalClient.Create starting
	I0507 11:04:37.648016   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:04:37.648045   10864 main.go:141] libmachine: Decoding PEM data...
	I0507 11:04:37.648056   10864 main.go:141] libmachine: Parsing certificate...
	I0507 11:04:37.648104   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:04:37.648127   10864 main.go:141] libmachine: Decoding PEM data...
	I0507 11:04:37.648136   10864 main.go:141] libmachine: Parsing certificate...
	I0507 11:04:37.648517   10864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:04:37.763980   10864 main.go:141] libmachine: Creating SSH key...
	I0507 11:04:37.882219   10864 main.go:141] libmachine: Creating Disk image...
	I0507 11:04:37.882225   10864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:04:37.882390   10864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:04:37.894715   10864 main.go:141] libmachine: STDOUT: 
	I0507 11:04:37.894737   10864 main.go:141] libmachine: STDERR: 
	I0507 11:04:37.894796   10864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2 +20000M
	I0507 11:04:37.905860   10864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:04:37.905876   10864 main.go:141] libmachine: STDERR: 
	I0507 11:04:37.905888   10864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:04:37.905892   10864 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:04:37.905921   10864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:34:6d:42:8d:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:04:37.907546   10864 main.go:141] libmachine: STDOUT: 
	I0507 11:04:37.907568   10864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:04:37.907590   10864 client.go:171] duration metric: took 259.645375ms to LocalClient.Create
	I0507 11:04:39.909809   10864 start.go:128] duration metric: took 2.287762958s to createHost
	I0507 11:04:39.909880   10864 start.go:83] releasing machines lock for "multinode-334000", held for 2.287926292s
	W0507 11:04:39.909949   10864 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:04:39.916201   10864 out.go:177] * Deleting "multinode-334000" in qemu2 ...
	W0507 11:04:39.939393   10864 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:04:39.939432   10864 start.go:728] Will try again in 5 seconds ...
	I0507 11:04:44.941544   10864 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:04:44.942068   10864 start.go:364] duration metric: took 414.833µs to acquireMachinesLock for "multinode-334000"
	I0507 11:04:44.942212   10864 start.go:93] Provisioning new machine with config: &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:04:44.942557   10864 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:04:44.954216   10864 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:04:45.003530   10864 start.go:159] libmachine.API.Create for "multinode-334000" (driver="qemu2")
	I0507 11:04:45.003587   10864 client.go:168] LocalClient.Create starting
	I0507 11:04:45.003693   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:04:45.003756   10864 main.go:141] libmachine: Decoding PEM data...
	I0507 11:04:45.003772   10864 main.go:141] libmachine: Parsing certificate...
	I0507 11:04:45.003838   10864 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:04:45.003882   10864 main.go:141] libmachine: Decoding PEM data...
	I0507 11:04:45.003899   10864 main.go:141] libmachine: Parsing certificate...
	I0507 11:04:45.004384   10864 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:04:45.132263   10864 main.go:141] libmachine: Creating SSH key...
	I0507 11:04:45.215393   10864 main.go:141] libmachine: Creating Disk image...
	I0507 11:04:45.215399   10864 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:04:45.215560   10864 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:04:45.228125   10864 main.go:141] libmachine: STDOUT: 
	I0507 11:04:45.228152   10864 main.go:141] libmachine: STDERR: 
	I0507 11:04:45.228212   10864 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2 +20000M
	I0507 11:04:45.239011   10864 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:04:45.239039   10864 main.go:141] libmachine: STDERR: 
	I0507 11:04:45.239051   10864 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:04:45.239057   10864 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:04:45.239087   10864 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dc:e3:7d:9e:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:04:45.240819   10864 main.go:141] libmachine: STDOUT: 
	I0507 11:04:45.240838   10864 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:04:45.240851   10864 client.go:171] duration metric: took 237.265958ms to LocalClient.Create
	I0507 11:04:47.242947   10864 start.go:128] duration metric: took 2.300436584s to createHost
	I0507 11:04:47.243012   10864 start.go:83] releasing machines lock for "multinode-334000", held for 2.300997791s
	W0507 11:04:47.243517   10864 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:04:47.254997   10864 out.go:177] 
	W0507 11:04:47.259140   10864 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:04:47.259166   10864 out.go:239] * 
	* 
	W0507 11:04:47.261784   10864 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:04:47.272133   10864 out.go:177] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-334000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (68.622792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
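The stderr above shows how the VM is wired up: libmachine does not run qemu-system-aarch64 directly, it runs it through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to qemu as fd 3 (hence "-netdev socket,id=net0,fd=3" in the command line). A rough Go rendering of that hand-off, hypothetical in its details and with the qemu arguments abridged, but matching the invocation in the log:

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Connect to the vmnet daemon's Unix socket; this is the step that fails
	// throughout this report with "Connection refused".
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("ERROR: Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child process, which qemu's
	// "-netdev socket,fd=3" then uses as its network backend.
	cmd := exec.Command("qemu-system-aarch64",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}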

TestMultiNode/serial/DeployApp2Nodes (98.37s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.867958ms)

** stderr ** 
	error: cluster "multinode-334000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- rollout status deployment/busybox: exit status 1 (55.344834ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.537666ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.630542ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.660791ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.037042ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.603417ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.673458ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.117792ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.045167ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.7885ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.760042ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.624ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.342833ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.347958ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.033333ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.140916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (98.37s)
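The 98 seconds spent here are almost entirely the harness retrying the pod-IP lookup against a cluster that never came up; each attempt shells out to kubectl and can only ever return "no server found". A sketch of the shape of that loop (the real test drives the minikube wrapper binary and uses its own retry schedule; the interval below is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs shells out to kubectl the way the test does; with no apiserver the
// command exits non-zero with `no server found for cluster "multinode-334000"`.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("kubectl", "--context", profile,
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
	return string(out), err
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if out, err := podIPs("multinode-334000"); err == nil {
			fmt.Println("pod IPs:", out)
			return
		} else {
			fmt.Printf("failed to retrieve Pod IPs (may be temporary): %v\n", err)
		}
		time.Sleep(5 * time.Second) // hypothetical backoff
	}
	fmt.Println("failed to resolve pod IPs")
}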

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-334000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.998834ms)

** stderr ** 
	error: no server found for cluster "multinode-334000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.108958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-334000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-334000 -v 3 --alsologtostderr: exit status 83 (41.465666ms)

-- stdout --
	* The control-plane node multinode-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-334000"

-- /stdout --
** stderr ** 
	I0507 11:06:25.836934   11006 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:25.837089   11006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:25.837092   11006 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:25.837094   11006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:25.837237   11006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:25.837469   11006 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:25.837670   11006 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:25.843120   11006 out.go:177] * The control-plane node multinode-334000 host is not running: state=Stopped
	I0507 11:06:25.847009   11006 out.go:177]   To start a cluster, run: "minikube start -p multinode-334000"
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-334000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (28.997958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-334000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-334000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.638666ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-334000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-334000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-334000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
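"unexpected end of JSON input" is what encoding/json reports when asked to decode an empty payload, which is exactly what kubectl produced here: the context lookup failed before any output was written. A tiny reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl exited with an error before printing anything, so the test
	// effectively tried to decode an empty byte slice.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}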
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.038125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.1s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-334000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-334000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-334000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-334000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
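The assertion boils down to decoding the `profile list --output json` payload above and counting the entries under Config.Nodes. A minimal sketch of that check, with the JSON trimmed to the relevant fields (the struct shape here is an assumption; the real test uses minikube's own config types):

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal shape of `minikube profile list --output json`, trimmed to the
// fields the node-count check needs; names match the JSON in the log.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-334000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	// The test expected 3 nodes; the stopped profile only retained 1.
	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes))
}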
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.169667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
TestMultiNode/serial/CopyFile (0.06s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --output json --alsologtostderr: exit status 7 (29.284708ms)
-- stdout --
	{"Name":"multinode-334000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I0507 11:06:26.061838   11019 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:26.061996   11019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.061999   11019 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:26.062001   11019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.062130   11019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:26.062246   11019 out.go:298] Setting JSON to true
	I0507 11:06:26.062256   11019 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:26.062313   11019 notify.go:220] Checking for updates...
	I0507 11:06:26.062435   11019 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:26.062441   11019 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:26.062657   11019 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:26.062660   11019 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:26.062662   11019 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-334000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
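`minikube status --output json` printed a single object for the lone surviving node, while the test decodes into a slice ([]cmd.Status), so json.Unmarshal fails with exactly this error. A small reproduction using a stand-in Status type:

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for cmd.Status with just the fields from the log line above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// One node means one JSON object, not an array, so decoding into a
	// slice fails the same way the test did.
	raw := []byte(`{"Name":"multinode-334000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var statuses []Status
	fmt.Println(json.Unmarshal(raw, &statuses))
	// json: cannot unmarshal object into Go value of type []main.Status
}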
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.289167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
TestMultiNode/serial/StopNode (0.13s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 node stop m03: exit status 85 (45.181833ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-334000 node stop m03": exit status 85
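GUEST_NODE_RETRIEVE fires because the stopped profile only retains its unnamed primary node, so a lookup of the secondary node "m03" cannot succeed. A hypothetical findNode helper illustrates the failure mode (the Node shape mirrors the profile JSON earlier in this report; minikube's actual retrieval logic lives in its node package):

package main

import "fmt"

// Node mirrors the entries under Config.Nodes in the profile JSON above.
type Node struct {
	Name         string
	ControlPlane bool
}

// findNode is a hypothetical helper: scan the configured nodes for a name.
func findNode(nodes []Node, name string) (Node, error) {
	for _, n := range nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	// From the profile list output: only the unnamed primary node survives.
	nodes := []Node{{Name: "", ControlPlane: true}}
	_, err := findNode(nodes, "m03")
	fmt.Println(err) // matches the GUEST_NODE_RETRIEVE message
}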
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status: exit status 7 (28.828084ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr: exit status 7 (29.16175ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:26.195038   11027 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:26.195187   11027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.195190   11027 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:26.195192   11027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.195329   11027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:26.195460   11027 out.go:298] Setting JSON to false
	I0507 11:06:26.195470   11027 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:26.195539   11027 notify.go:220] Checking for updates...
	I0507 11:06:26.195691   11027 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:26.195697   11027 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:26.195919   11027 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:26.195923   11027 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:26.195925   11027 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr": multinode-334000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
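With every component Stopped, any count of running kubelets in this output comes back zero instead of the expected number of live nodes. A sketch of that style of check, assuming the simplification that the assertion is a plain substring count over the status text:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above: every component is Stopped, so the
	// count of running kubelets is 0 rather than the expected tally.
	out := "multinode-334000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println("running kubelets:", strings.Count(out, "kubelet: Running"))
}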
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (28.929208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
TestMultiNode/serial/StartAfterStop (50.42s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.516583ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0507 11:06:26.253329   11031 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:26.253715   11031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.253719   11031 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:26.253721   11031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.253879   11031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:26.254085   11031 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:26.254285   11031 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:26.258640   11031 out.go:177] 
	W0507 11:06:26.261837   11031 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0507 11:06:26.261842   11031 out.go:239] * 
	* 
	W0507 11:06:26.263838   11031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:06:26.267742   11031 out.go:177] 
** /stderr **
multinode_test.go:284: I0507 11:06:26.253329   11031 out.go:291] Setting OutFile to fd 1 ...
I0507 11:06:26.253715   11031 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 11:06:26.253719   11031 out.go:304] Setting ErrFile to fd 2...
I0507 11:06:26.253721   11031 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 11:06:26.253879   11031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
I0507 11:06:26.254085   11031 mustload.go:65] Loading cluster: multinode-334000
I0507 11:06:26.254285   11031 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 11:06:26.258640   11031 out.go:177] 
W0507 11:06:26.261837   11031 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0507 11:06:26.261842   11031 out.go:239] * 
* 
W0507 11:06:26.263838   11031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0507 11:06:26.267742   11031 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-334000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (29.0605ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:26.300042   11033 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:26.300200   11033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.300203   11033 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:26.300205   11033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:26.300334   11033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:26.300451   11033 out.go:298] Setting JSON to false
	I0507 11:06:26.300461   11033 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:26.300523   11033 notify.go:220] Checking for updates...
	I0507 11:06:26.300659   11033 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:26.300665   11033 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:26.300891   11033 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:26.300895   11033 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:26.300898   11033 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (74.412916ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:27.824639   11035 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:27.824837   11035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:27.824842   11035 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:27.824846   11035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:27.825040   11035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:27.825228   11035 out.go:298] Setting JSON to false
	I0507 11:06:27.825244   11035 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:27.825280   11035 notify.go:220] Checking for updates...
	I0507 11:06:27.825549   11035 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:27.825557   11035 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:27.825875   11035 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:27.825880   11035 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:27.825884   11035 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (67.941792ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:30.125168   11037 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:30.125475   11037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:30.125480   11037 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:30.125484   11037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:30.125675   11037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:30.125880   11037 out.go:298] Setting JSON to false
	I0507 11:06:30.125897   11037 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:30.125945   11037 notify.go:220] Checking for updates...
	I0507 11:06:30.126201   11037 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:30.126209   11037 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:30.126517   11037 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:30.126522   11037 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:30.126525   11037 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (72.248125ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:33.569263   11043 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:33.569481   11043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:33.569486   11043 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:33.569490   11043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:33.569661   11043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:33.569835   11043 out.go:298] Setting JSON to false
	I0507 11:06:33.569855   11043 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:33.569900   11043 notify.go:220] Checking for updates...
	I0507 11:06:33.570135   11043 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:33.570142   11043 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:33.570426   11043 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:33.570431   11043 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:33.570434   11043 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (70.738208ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:37.996218   11045 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:37.996426   11045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:37.996431   11045 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:37.996434   11045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:37.996602   11045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:37.996770   11045 out.go:298] Setting JSON to false
	I0507 11:06:37.996783   11045 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:37.996818   11045 notify.go:220] Checking for updates...
	I0507 11:06:37.997037   11045 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:37.997045   11045 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:37.997329   11045 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:37.997335   11045 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:37.997337   11045 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (75.794334ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:40.874400   11050 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:40.874603   11050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:40.874607   11050 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:40.874611   11050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:40.874804   11050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:40.874996   11050 out.go:298] Setting JSON to false
	I0507 11:06:40.875013   11050 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:40.875085   11050 notify.go:220] Checking for updates...
	I0507 11:06:40.875315   11050 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:40.875322   11050 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:40.875630   11050 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:40.875635   11050 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:40.875638   11050 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (72.087791ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:06:51.712839   11060 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:06:51.713055   11060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:51.713059   11060 out.go:304] Setting ErrFile to fd 2...
	I0507 11:06:51.713062   11060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:06:51.713237   11060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:06:51.713399   11060 out.go:298] Setting JSON to false
	I0507 11:06:51.713415   11060 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:06:51.713455   11060 notify.go:220] Checking for updates...
	I0507 11:06:51.713664   11060 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:06:51.713671   11060 status.go:255] checking status of multinode-334000 ...
	I0507 11:06:51.713981   11060 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:06:51.713987   11060 status.go:343] host is not running, skipping remaining checks
	I0507 11:06:51.713989   11060 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (73.296042ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:07:05.211837   11064 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:05.212063   11064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:05.212067   11064 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:05.212070   11064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:05.212235   11064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:05.212380   11064 out.go:298] Setting JSON to false
	I0507 11:07:05.212393   11064 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:07:05.212437   11064 notify.go:220] Checking for updates...
	I0507 11:07:05.212651   11064 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:05.212658   11064 status.go:255] checking status of multinode-334000 ...
	I0507 11:07:05.212936   11064 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:07:05.212941   11064 status.go:343] host is not running, skipping remaining checks
	I0507 11:07:05.212944   11064 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr: exit status 7 (73.618458ms)
-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0507 11:07:16.610556   11072 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:16.610750   11072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:16.610755   11072 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:16.610758   11072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:16.610948   11072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:16.611125   11072 out.go:298] Setting JSON to false
	I0507 11:07:16.611140   11072 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:07:16.611181   11072 notify.go:220] Checking for updates...
	I0507 11:07:16.611530   11072 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:16.611550   11072 status.go:255] checking status of multinode-334000 ...
	I0507 11:07:16.611896   11072 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:07:16.611902   11072 status.go:343] host is not running, skipping remaining checks
	I0507 11:07:16.611905   11072 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-334000 status -v=7 --alsologtostderr" : exit status 7
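The timestamps above (11:06:26 through 11:07:16) show the test polling `status` repeatedly before giving up. A sketch of the retry loop involved, with the deadline and interval read off the log rather than taken from the test's actual constants:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Re-run `minikube status` until the host reports Running or the deadline
// passes; binary path and profile name come from the log.
func main() {
	deadline := time.Now().Add(50 * time.Second)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-334000", "status").CombinedOutput()
		if strings.Contains(string(out), "host: Running") {
			fmt.Println("host is up")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up: host never left Stopped")
}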
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (32.809625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (50.42s)
TestMultiNode/serial/RestartKeepsNodes (8.65s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-334000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-334000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-334000: (3.301154917s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219455625s)
-- stdout --
	* [multinode-334000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
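Every restart attempt in this run dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the CI host is not accepting connections, so QEMU networking cannot come up. Before the full stderr log below, a quick probe sketch that reproduces the symptom by dialing the same unix socket:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the socket_vmnet unix socket from the log; with no daemon
	// listening this returns the same "connection refused" seen above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}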
** stderr ** 
	I0507 11:07:20.040433   11102 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:20.040594   11102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:20.040598   11102 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:20.040602   11102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:20.040746   11102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:20.041994   11102 out.go:298] Setting JSON to false
	I0507 11:07:20.061195   11102 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5811,"bootTime":1715099429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:07:20.061271   11102 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:07:20.066022   11102 out.go:177] * [multinode-334000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:07:20.073990   11102 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:07:20.074045   11102 notify.go:220] Checking for updates...
	I0507 11:07:20.079935   11102 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:07:20.082927   11102 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:07:20.084349   11102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:07:20.086921   11102 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:07:20.090023   11102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:07:20.093306   11102 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:20.093380   11102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:07:20.097945   11102 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:07:20.104986   11102 start.go:297] selected driver: qemu2
	I0507 11:07:20.104995   11102 start.go:901] validating driver "qemu2" against &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:07:20.105056   11102 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:07:20.107484   11102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:07:20.107517   11102 cni.go:84] Creating CNI manager for ""
	I0507 11:07:20.107522   11102 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 11:07:20.107575   11102 start.go:340] cluster config:
	{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:07:20.112123   11102 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:20.118958   11102 out.go:177] * Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	I0507 11:07:20.122873   11102 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:07:20.122891   11102 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:07:20.122895   11102 cache.go:56] Caching tarball of preloaded images
	I0507 11:07:20.122954   11102 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:07:20.122960   11102 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:07:20.123028   11102 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/multinode-334000/config.json ...
	I0507 11:07:20.123464   11102 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:07:20.123504   11102 start.go:364] duration metric: took 32.375µs to acquireMachinesLock for "multinode-334000"
	I0507 11:07:20.123514   11102 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:07:20.123519   11102 fix.go:54] fixHost starting: 
	I0507 11:07:20.123646   11102 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0507 11:07:20.123655   11102 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:07:20.130895   11102 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0507 11:07:20.134950   11102 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dc:e3:7d:9e:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:07:20.137036   11102 main.go:141] libmachine: STDOUT: 
	I0507 11:07:20.137059   11102 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:07:20.137089   11102 fix.go:56] duration metric: took 13.569ms for fixHost
	I0507 11:07:20.137093   11102 start.go:83] releasing machines lock for "multinode-334000", held for 13.585291ms
	W0507 11:07:20.137102   11102 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:07:20.137130   11102 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:07:20.137135   11102 start.go:728] Will try again in 5 seconds ...
	I0507 11:07:25.139186   11102 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:07:25.139637   11102 start.go:364] duration metric: took 321.291µs to acquireMachinesLock for "multinode-334000"
	I0507 11:07:25.139773   11102 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:07:25.139795   11102 fix.go:54] fixHost starting: 
	I0507 11:07:25.140634   11102 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0507 11:07:25.140660   11102 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:07:25.145196   11102 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0507 11:07:25.152373   11102 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dc:e3:7d:9e:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:07:25.161635   11102 main.go:141] libmachine: STDOUT: 
	I0507 11:07:25.161795   11102 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:07:25.161897   11102 fix.go:56] duration metric: took 22.100333ms for fixHost
	I0507 11:07:25.161916   11102 start.go:83] releasing machines lock for "multinode-334000", held for 22.256333ms
	W0507 11:07:25.162097   11102 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:07:25.168007   11102 out.go:177] 
	W0507 11:07:25.172102   11102 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:07:25.172126   11102 out.go:239] * 
	* 
	W0507 11:07:25.174796   11102 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:07:25.182083   11102 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-334000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-334000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (32.200208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.65s)
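Note: every failure in this test (and the tests below) traces to the same line: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A quick spot-check of the daemon on the build host, using only paths that appear in the log above (the --vmnet-gateway address is an assumption taken from the socket_vmnet README, not from this report):

	ls -l /var/run/socket_vmnet    # daemon socket should exist
	pgrep -fl socket_vmnet         # daemon process should be running
	# If absent, start it by hand (requires root; gateway address assumed):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet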

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 node delete m03: exit status 83 (41.182833ms)

-- stdout --
	* The control-plane node multinode-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-334000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-334000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr: exit status 7 (29.331167ms)

-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:07:25.365419   11118 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:25.365556   11118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:25.365560   11118 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:25.365562   11118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:25.365690   11118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:25.365814   11118 out.go:298] Setting JSON to false
	I0507 11:07:25.365825   11118 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:07:25.365891   11118 notify.go:220] Checking for updates...
	I0507 11:07:25.366014   11118 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:25.366020   11118 status.go:255] checking status of multinode-334000 ...
	I0507 11:07:25.366219   11118 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:07:25.366223   11118 status.go:343] host is not running, skipping remaining checks
	I0507 11:07:25.366225   11118 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.413416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
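Note: the status.go:257 line above logs the struct behind "minikube status"; its fields (Name, Host, Kubelet, APIServer, Kubeconfig) are what the --format Go template in the post-mortem selects from. An illustrative one-line probe against this profile (the template string is hypothetical; the field names are taken from the log):

	out/minikube-darwin-arm64 status -p multinode-334000 --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'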

TestMultiNode/serial/StopMultiNode (3.24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-334000 stop: (3.115661541s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status: exit status 7 (62.824333ms)

-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr: exit status 7 (35.95375ms)

-- stdout --
	multinode-334000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0507 11:07:28.605435   11144 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:28.605584   11144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:28.605587   11144 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:28.605589   11144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:28.605726   11144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:28.605855   11144 out.go:298] Setting JSON to false
	I0507 11:07:28.605869   11144 mustload.go:65] Loading cluster: multinode-334000
	I0507 11:07:28.605901   11144 notify.go:220] Checking for updates...
	I0507 11:07:28.606060   11144 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:28.606066   11144 status.go:255] checking status of multinode-334000 ...
	I0507 11:07:28.606286   11144 status.go:330] multinode-334000 host status = "Stopped" (err=<nil>)
	I0507 11:07:28.606289   11144 status.go:343] host is not running, skipping remaining checks
	I0507 11:07:28.606291   11144 status.go:257] multinode-334000 status: &{Name:multinode-334000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr": multinode-334000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-334000 status --alsologtostderr": multinode-334000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (28.705291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.24s)
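Note: the two "incorrect number" assertions count per-node "host: Stopped" / "kubelet: Stopped" entries in the status output; because the extra nodes were never provisioned, only the single control-plane entry appears. A rough shell equivalent of that count (illustrative only, not the test's actual Go assertion):

	out/minikube-darwin-arm64 -p multinode-334000 status | grep -c 'host: Stopped'   # 1 here; the test expects one match per node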

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.182322s)

-- stdout --
	* [multinode-334000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-334000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:07:28.667577   11148 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:28.667697   11148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:28.667704   11148 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:28.667707   11148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:28.667830   11148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:28.668845   11148 out.go:298] Setting JSON to false
	I0507 11:07:28.684740   11148 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5819,"bootTime":1715099429,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:07:28.684803   11148 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:07:28.690210   11148 out.go:177] * [multinode-334000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:07:28.698266   11148 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:07:28.698324   11148 notify.go:220] Checking for updates...
	I0507 11:07:28.702173   11148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:07:28.705214   11148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:07:28.708223   11148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:07:28.711113   11148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:07:28.714169   11148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:07:28.717555   11148 config.go:182] Loaded profile config "multinode-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:28.717831   11148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:07:28.722155   11148 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:07:28.729203   11148 start.go:297] selected driver: qemu2
	I0507 11:07:28.729213   11148 start.go:901] validating driver "qemu2" against &{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:07:28.729271   11148 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:07:28.731542   11148 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:07:28.731568   11148 cni.go:84] Creating CNI manager for ""
	I0507 11:07:28.731573   11148 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 11:07:28.731623   11148 start.go:340] cluster config:
	{Name:multinode-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:07:28.735969   11148 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:28.743172   11148 out.go:177] * Starting "multinode-334000" primary control-plane node in "multinode-334000" cluster
	I0507 11:07:28.747187   11148 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:07:28.747203   11148 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:07:28.747210   11148 cache.go:56] Caching tarball of preloaded images
	I0507 11:07:28.747276   11148 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:07:28.747281   11148 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:07:28.747356   11148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/multinode-334000/config.json ...
	I0507 11:07:28.747775   11148 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:07:28.747804   11148 start.go:364] duration metric: took 22.917µs to acquireMachinesLock for "multinode-334000"
	I0507 11:07:28.747814   11148 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:07:28.747819   11148 fix.go:54] fixHost starting: 
	I0507 11:07:28.747942   11148 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0507 11:07:28.747951   11148 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:07:28.756198   11148 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0507 11:07:28.760248   11148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dc:e3:7d:9e:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:07:28.762261   11148 main.go:141] libmachine: STDOUT: 
	I0507 11:07:28.762283   11148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:07:28.762310   11148 fix.go:56] duration metric: took 14.491208ms for fixHost
	I0507 11:07:28.762314   11148 start.go:83] releasing machines lock for "multinode-334000", held for 14.505917ms
	W0507 11:07:28.762321   11148 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:07:28.762351   11148 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:07:28.762355   11148 start.go:728] Will try again in 5 seconds ...
	I0507 11:07:33.764427   11148 start.go:360] acquireMachinesLock for multinode-334000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:07:33.764949   11148 start.go:364] duration metric: took 362.708µs to acquireMachinesLock for "multinode-334000"
	I0507 11:07:33.765095   11148 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:07:33.765119   11148 fix.go:54] fixHost starting: 
	I0507 11:07:33.765837   11148 fix.go:112] recreateIfNeeded on multinode-334000: state=Stopped err=<nil>
	W0507 11:07:33.765864   11148 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:07:33.770393   11148 out.go:177] * Restarting existing qemu2 VM for "multinode-334000" ...
	I0507 11:07:33.778630   11148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dc:e3:7d:9e:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/multinode-334000/disk.qcow2
	I0507 11:07:33.788215   11148 main.go:141] libmachine: STDOUT: 
	I0507 11:07:33.788285   11148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:07:33.788371   11148 fix.go:56] duration metric: took 23.26ms for fixHost
	I0507 11:07:33.788385   11148 start.go:83] releasing machines lock for "multinode-334000", held for 23.414625ms
	W0507 11:07:33.788588   11148 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:07:33.795419   11148 out.go:177] 
	W0507 11:07:33.799384   11148 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:07:33.799412   11148 out.go:239] * 
	* 
	W0507 11:07:33.801564   11148 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:07:33.809334   11148 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-334000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (67.587583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
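Note: the failing step is always the exec shown in the main.go:141 lines above. Trimmed to its skeleton, the attempted launch looks like this, so the failure reproduces without minikube at all (arguments abbreviated; the trailing ... stands for the drive/cdrom/netdev flags in the full command above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 ...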

TestMultiNode/serial/ValidateNameConflict (20.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-334000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000-m01 --driver=qemu2 : exit status 80 (9.875344375s)

-- stdout --
	* [multinode-334000-m01] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-334000-m01" primary control-plane node in "multinode-334000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-334000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2 : exit status 80 (10.22636925s)

-- stdout --
	* [multinode-334000-m02] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-334000-m02" primary control-plane node in "multinode-334000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-334000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-334000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-334000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-334000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-334000: exit status 83 (79.512417ms)

-- stdout --
	* The control-plane node multinode-334000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-334000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-334000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-334000 -n multinode-334000: exit status 7 (29.877667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.35s)
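Note: this test intentionally creates standalone profiles named like worker nodes of the existing cluster (multinode-334000-m01/-m02) to verify the name-conflict check; both creations die on the socket error before that check is ever exercised. The profiles involved can be inspected with (real subcommand; output omitted):

	out/minikube-darwin-arm64 profile list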

TestPreload (9.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-448000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-448000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.790342792s)

-- stdout --
	* [test-preload-448000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-448000" primary control-plane node in "test-preload-448000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-448000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:07:54.391410   11221 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:07:54.391537   11221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:54.391539   11221 out.go:304] Setting ErrFile to fd 2...
	I0507 11:07:54.391542   11221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:07:54.391672   11221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:07:54.392719   11221 out.go:298] Setting JSON to false
	I0507 11:07:54.408737   11221 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5845,"bootTime":1715099429,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:07:54.408826   11221 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:07:54.414590   11221 out.go:177] * [test-preload-448000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:07:54.421585   11221 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:07:54.421649   11221 notify.go:220] Checking for updates...
	I0507 11:07:54.428522   11221 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:07:54.431490   11221 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:07:54.434543   11221 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:07:54.437542   11221 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:07:54.440476   11221 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:07:54.443884   11221 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:07:54.443939   11221 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:07:54.448536   11221 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:07:54.455523   11221 start.go:297] selected driver: qemu2
	I0507 11:07:54.455530   11221 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:07:54.455537   11221 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:07:54.457874   11221 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:07:54.460502   11221 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:07:54.463614   11221 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:07:54.463636   11221 cni.go:84] Creating CNI manager for ""
	I0507 11:07:54.463647   11221 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:07:54.463652   11221 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:07:54.463685   11221 start.go:340] cluster config:
	{Name:test-preload-448000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:07:54.468133   11221 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.475573   11221 out.go:177] * Starting "test-preload-448000" primary control-plane node in "test-preload-448000" cluster
	I0507 11:07:54.479523   11221 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0507 11:07:54.479643   11221 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/test-preload-448000/config.json ...
	I0507 11:07:54.479643   11221 cache.go:107] acquiring lock: {Name:mk93cab9782caf818e2fce3a23d39a17d84a3524 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479661   11221 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/test-preload-448000/config.json: {Name:mk8ad358202b7011e37cd3256a18ad984fde31ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:07:54.479650   11221 cache.go:107] acquiring lock: {Name:mka5f207418ef8c37dd55c91253fc9d34605943f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479663   11221 cache.go:107] acquiring lock: {Name:mk141aab51427b05ff4133043f919921b2a4a6c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479676   11221 cache.go:107] acquiring lock: {Name:mk2e7e5b010552176f9d11821dfc8ed09e4aa294 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479685   11221 cache.go:107] acquiring lock: {Name:mk06535c900cb6330e3b6440930374f7b7c72b4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479644   11221 cache.go:107] acquiring lock: {Name:mk8aa49ec51e728f5abf889afb1eece7afa5fca7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479801   11221 cache.go:107] acquiring lock: {Name:mkbb7e0fdca17549fbf44c33e10f55204d8dcc14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479849   11221 cache.go:107] acquiring lock: {Name:mk216ede0750739781d6832242a85501b2233e35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:07:54.479972   11221 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0507 11:07:54.479998   11221 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:07:54.480026   11221 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0507 11:07:54.480054   11221 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0507 11:07:54.480069   11221 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0507 11:07:54.480155   11221 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:07:54.480183   11221 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0507 11:07:54.480274   11221 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:07:54.480287   11221 start.go:360] acquireMachinesLock for test-preload-448000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:07:54.480333   11221 start.go:364] duration metric: took 35.542µs to acquireMachinesLock for "test-preload-448000"
	I0507 11:07:54.480344   11221 start.go:93] Provisioning new machine with config: &{Name:test-preload-448000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:07:54.480381   11221 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:07:54.488524   11221 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:07:54.494175   11221 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0507 11:07:54.494302   11221 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0507 11:07:54.494855   11221 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0507 11:07:54.495003   11221 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0507 11:07:54.495047   11221 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:07:54.498268   11221 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:07:54.499229   11221 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:07:54.499368   11221 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0507 11:07:54.507187   11221 start.go:159] libmachine.API.Create for "test-preload-448000" (driver="qemu2")
	I0507 11:07:54.507214   11221 client.go:168] LocalClient.Create starting
	I0507 11:07:54.507303   11221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:07:54.507333   11221 main.go:141] libmachine: Decoding PEM data...
	I0507 11:07:54.507343   11221 main.go:141] libmachine: Parsing certificate...
	I0507 11:07:54.507382   11221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:07:54.507410   11221 main.go:141] libmachine: Decoding PEM data...
	I0507 11:07:54.507420   11221 main.go:141] libmachine: Parsing certificate...
	I0507 11:07:54.507722   11221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:07:54.634305   11221 main.go:141] libmachine: Creating SSH key...
	I0507 11:07:54.719249   11221 main.go:141] libmachine: Creating Disk image...
	I0507 11:07:54.719267   11221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:07:54.719427   11221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2
	I0507 11:07:54.732241   11221 main.go:141] libmachine: STDOUT: 
	I0507 11:07:54.732270   11221 main.go:141] libmachine: STDERR: 
	I0507 11:07:54.732346   11221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2 +20000M
	I0507 11:07:54.744539   11221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:07:54.744559   11221 main.go:141] libmachine: STDERR: 
	I0507 11:07:54.744572   11221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2
	I0507 11:07:54.744576   11221 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:07:54.744603   11221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:00:a4:b9:5c:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2
	I0507 11:07:54.746556   11221 main.go:141] libmachine: STDOUT: 
	I0507 11:07:54.746575   11221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:07:54.746594   11221 client.go:171] duration metric: took 239.383542ms to LocalClient.Create
	I0507 11:07:55.509209   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0507 11:07:55.537695   11221 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0507 11:07:55.537778   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0507 11:07:55.550088   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0507 11:07:55.557368   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0507 11:07:55.582167   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0507 11:07:55.671667   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0507 11:07:55.671740   11221 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.192133792s
	I0507 11:07:55.671787   11221 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0507 11:07:55.748404   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0507 11:07:55.792791   11221 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0507 11:07:55.792875   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0507 11:07:55.794101   11221 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0507 11:07:55.834734   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0507 11:07:55.834790   11221 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.355195458s
	I0507 11:07:55.834812   11221 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0507 11:07:56.746845   11221 start.go:128] duration metric: took 2.266506875s to createHost
	I0507 11:07:56.746918   11221 start.go:83] releasing machines lock for "test-preload-448000", held for 2.266651375s
	W0507 11:07:56.746982   11221 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:07:56.758053   11221 out.go:177] * Deleting "test-preload-448000" in qemu2 ...
	W0507 11:07:56.779931   11221 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:07:56.779969   11221 start.go:728] Will try again in 5 seconds ...
	I0507 11:07:57.140304   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0507 11:07:57.140371   11221 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.660770667s
	I0507 11:07:57.140409   11221 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0507 11:07:57.801675   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0507 11:07:57.801720   11221 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.32216875s
	I0507 11:07:57.801778   11221 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0507 11:07:59.118002   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0507 11:07:59.118078   11221 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.638588417s
	I0507 11:07:59.118109   11221 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0507 11:07:59.256043   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0507 11:07:59.256093   11221 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.776577625s
	I0507 11:07:59.256117   11221 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0507 11:08:01.780035   11221 start.go:360] acquireMachinesLock for test-preload-448000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:08:01.780472   11221 start.go:364] duration metric: took 349.333µs to acquireMachinesLock for "test-preload-448000"
	I0507 11:08:01.780600   11221 start.go:93] Provisioning new machine with config: &{Name:test-preload-448000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:08:01.780884   11221 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:08:01.787504   11221 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:08:01.838353   11221 start.go:159] libmachine.API.Create for "test-preload-448000" (driver="qemu2")
	I0507 11:08:01.838427   11221 client.go:168] LocalClient.Create starting
	I0507 11:08:01.838563   11221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:08:01.838645   11221 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:01.838673   11221 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:01.838754   11221 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:08:01.838817   11221 main.go:141] libmachine: Decoding PEM data...
	I0507 11:08:01.838847   11221 main.go:141] libmachine: Parsing certificate...
	I0507 11:08:01.839404   11221 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:08:01.897614   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0507 11:08:01.897637   11221 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.418069791s
	I0507 11:08:01.897651   11221 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0507 11:08:01.971619   11221 main.go:141] libmachine: Creating SSH key...
	I0507 11:08:02.083965   11221 main.go:141] libmachine: Creating Disk image...
	I0507 11:08:02.083971   11221 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:08:02.084136   11221 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2
	I0507 11:08:02.096946   11221 main.go:141] libmachine: STDOUT: 
	I0507 11:08:02.096969   11221 main.go:141] libmachine: STDERR: 
	I0507 11:08:02.097019   11221 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2 +20000M
	I0507 11:08:02.108273   11221 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:08:02.108292   11221 main.go:141] libmachine: STDERR: 
	I0507 11:08:02.108304   11221 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2
	I0507 11:08:02.108307   11221 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:08:02.108354   11221 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:3d:31:42:23:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/test-preload-448000/disk.qcow2
	I0507 11:08:02.110378   11221 main.go:141] libmachine: STDOUT: 
	I0507 11:08:02.110394   11221 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:08:02.110406   11221 client.go:171] duration metric: took 271.983375ms to LocalClient.Create
	I0507 11:08:02.833687   11221 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0507 11:08:02.833781   11221 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.354269875s
	I0507 11:08:02.833806   11221 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0507 11:08:02.833866   11221 cache.go:87] Successfully saved all images to host disk.
	I0507 11:08:04.112618   11221 start.go:128] duration metric: took 2.331770333s to createHost
	I0507 11:08:04.112704   11221 start.go:83] releasing machines lock for "test-preload-448000", held for 2.332285292s
	W0507 11:08:04.113016   11221 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:08:04.124514   11221 out.go:177] 
	W0507 11:08:04.128557   11221 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:08:04.128583   11221 out.go:239] * 
	* 
	W0507 11:08:04.131196   11221 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:08:04.141497   11221 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-448000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-07 11:08:04.15667 -0700 PDT m=+664.409436501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-448000 -n test-preload-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-448000 -n test-preload-448000: exit status 7 (68.096833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-448000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-448000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-448000
--- FAIL: TestPreload (9.96s)
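Note: every failed VM create in this report dies at the same step: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives its network descriptor and minikube gives up with GUEST_PROVISION. Before re-running the suite it is worth confirming that the socket_vmnet daemon is up at all. A minimal preflight sketch in Go, assuming only the socket path shown in the qemu command line above (illustrative, not minikube code):

	// preflight.go: check whether the socket_vmnet unix socket accepts connections.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Socket path copied from the qemu invocation in the log above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the "Connection refused" seen throughout this report.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the check fails, the daemon usually needs to be (re)started as root, e.g. "sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet" (the binary path matches this log; the gateway address is the socket_vmnet README's example, not a value from this run).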

TestScheduledStopUnix (9.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-880000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-880000 --memory=2048 --driver=qemu2 : exit status 80 (9.720724625s)

-- stdout --
	* [scheduled-stop-880000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-880000" primary control-plane node in "scheduled-stop-880000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-880000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-880000" primary control-plane node in "scheduled-stop-880000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-07 11:08:14.043941 -0700 PDT m=+674.297055960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-880000 -n scheduled-stop-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-880000 -n scheduled-stop-880000: exit status 7 (66.316917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-880000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-880000
--- FAIL: TestScheduledStopUnix (9.89s)

TestSkaffold (12.8s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe596130305 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-265000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-265000 --memory=2600 --driver=qemu2 : exit status 80 (9.911967125s)

-- stdout --
	* [skaffold-265000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-265000" primary control-plane node in "skaffold-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-265000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-265000" primary control-plane node in "skaffold-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-07 11:08:26.848012 -0700 PDT m=+687.101580085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-265000 -n skaffold-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-265000 -n skaffold-265000: exit status 7 (63.062875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-265000
--- FAIL: TestSkaffold (12.80s)
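Note: none of the qemu command lines above open the network themselves. socket_vmnet_client connects to the unix socket and hands the connected descriptor to qemu as fd 3, which is what "-netdev socket,id=net0,fd=3" refers to; a refused connection therefore aborts the entire VM create before qemu even starts. A rough Go sketch of that fd handoff (an illustration of the pattern, not the actual socket_vmnet_client implementation):

	// handoff.go: dial the socket, then exec qemu with the connection as fd 3.
	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // the failure mode seen in this report
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child (after stdin, stdout, stderr),
		// matching the fd=3 in the -netdev argument. The real invocation carries
		// the full argument list from the log.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}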

TestRunningBinaryUpgrade (598.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.761083898 start -p running-upgrade-776000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.761083898 start -p running-upgrade-776000 --memory=2200 --vm-driver=qemu2 : (52.382179083s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-776000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-776000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m31.867454959s)

-- stdout --
	* [running-upgrade-776000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-776000" primary control-plane node in "running-upgrade-776000" cluster
	* Updating the running qemu2 "running-upgrade-776000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0507 11:10:00.987247   11681 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:10:00.987394   11681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:10:00.987400   11681 out.go:304] Setting ErrFile to fd 2...
	I0507 11:10:00.987402   11681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:10:00.987531   11681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:10:00.988794   11681 out.go:298] Setting JSON to false
	I0507 11:10:01.006035   11681 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5971,"bootTime":1715099429,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:10:01.006102   11681 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:10:01.010854   11681 out.go:177] * [running-upgrade-776000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:10:01.017818   11681 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:10:01.021658   11681 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:10:01.017891   11681 notify.go:220] Checking for updates...
	I0507 11:10:01.029769   11681 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:10:01.032755   11681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:10:01.035746   11681 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:10:01.038787   11681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:10:01.044226   11681 config.go:182] Loaded profile config "running-upgrade-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:10:01.047743   11681 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0507 11:10:01.050753   11681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:10:01.054655   11681 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:10:01.061794   11681 start.go:297] selected driver: qemu2
	I0507 11:10:01.061799   11681 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-776000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:10:01.061846   11681 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:10:01.064013   11681 cni.go:84] Creating CNI manager for ""
	I0507 11:10:01.064030   11681 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:10:01.064057   11681 start.go:340] cluster config:
	{Name:running-upgrade-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-776000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:10:01.064110   11681 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:10:01.071775   11681 out.go:177] * Starting "running-upgrade-776000" primary control-plane node in "running-upgrade-776000" cluster
	I0507 11:10:01.075802   11681 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0507 11:10:01.075826   11681 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0507 11:10:01.075832   11681 cache.go:56] Caching tarball of preloaded images
	I0507 11:10:01.075892   11681 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:10:01.075897   11681 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0507 11:10:01.075949   11681 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/config.json ...
	I0507 11:10:01.076442   11681 start.go:360] acquireMachinesLock for running-upgrade-776000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:10:01.076475   11681 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "running-upgrade-776000"
	I0507 11:10:01.076484   11681 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:10:01.076489   11681 fix.go:54] fixHost starting: 
	I0507 11:10:01.077171   11681 fix.go:112] recreateIfNeeded on running-upgrade-776000: state=Running err=<nil>
	W0507 11:10:01.077178   11681 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:10:01.084800   11681 out.go:177] * Updating the running qemu2 "running-upgrade-776000" VM ...
	I0507 11:10:01.088788   11681 machine.go:94] provisionDockerMachine start ...
	I0507 11:10:01.088824   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.088930   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.088935   11681 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 11:10:01.157544   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-776000
	
	I0507 11:10:01.157567   11681 buildroot.go:166] provisioning hostname "running-upgrade-776000"
	I0507 11:10:01.157633   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.157768   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.157773   11681 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-776000 && echo "running-upgrade-776000" | sudo tee /etc/hostname
	I0507 11:10:01.228549   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-776000
	
	I0507 11:10:01.228596   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.228696   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.228707   11681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-776000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-776000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-776000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 11:10:01.297280   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 11:10:01.297303   11681 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18804-8175/.minikube CaCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18804-8175/.minikube}
	I0507 11:10:01.297316   11681 buildroot.go:174] setting up certificates
	I0507 11:10:01.297320   11681 provision.go:84] configureAuth start
	I0507 11:10:01.297325   11681 provision.go:143] copyHostCerts
	I0507 11:10:01.297417   11681 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem, removing ...
	I0507 11:10:01.297424   11681 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem
	I0507 11:10:01.297558   11681 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem (1078 bytes)
	I0507 11:10:01.297763   11681 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem, removing ...
	I0507 11:10:01.297766   11681 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem
	I0507 11:10:01.297816   11681 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem (1123 bytes)
	I0507 11:10:01.297918   11681 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem, removing ...
	I0507 11:10:01.297921   11681 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem
	I0507 11:10:01.297963   11681 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem (1675 bytes)
	I0507 11:10:01.298046   11681 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-776000 san=[127.0.0.1 localhost minikube running-upgrade-776000]
	I0507 11:10:01.396368   11681 provision.go:177] copyRemoteCerts
	I0507 11:10:01.396411   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 11:10:01.396430   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:10:01.433093   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 11:10:01.439896   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0507 11:10:01.446622   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0507 11:10:01.453608   11681 provision.go:87] duration metric: took 156.283625ms to configureAuth
	I0507 11:10:01.453619   11681 buildroot.go:189] setting minikube options for container-runtime
	I0507 11:10:01.453728   11681 config.go:182] Loaded profile config "running-upgrade-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:10:01.453765   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.453860   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.453865   11681 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 11:10:01.521478   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 11:10:01.521487   11681 buildroot.go:70] root file system type: tmpfs
	I0507 11:10:01.521554   11681 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 11:10:01.521600   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.521703   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.521735   11681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 11:10:01.597244   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 11:10:01.597303   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.597416   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.597424   11681 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 11:10:01.666871   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 11:10:01.666882   11681 machine.go:97] duration metric: took 578.108ms to provisionDockerMachine
	I0507 11:10:01.666888   11681 start.go:293] postStartSetup for "running-upgrade-776000" (driver="qemu2")
	I0507 11:10:01.666894   11681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 11:10:01.666943   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 11:10:01.666952   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:10:01.704045   11681 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 11:10:01.705320   11681 info.go:137] Remote host: Buildroot 2021.02.12
	I0507 11:10:01.705330   11681 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18804-8175/.minikube/addons for local assets ...
	I0507 11:10:01.705415   11681 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18804-8175/.minikube/files for local assets ...
	I0507 11:10:01.705530   11681 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem -> 94222.pem in /etc/ssl/certs
	I0507 11:10:01.705658   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 11:10:01.708255   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem --> /etc/ssl/certs/94222.pem (1708 bytes)
	I0507 11:10:01.714795   11681 start.go:296] duration metric: took 47.904666ms for postStartSetup
	I0507 11:10:01.714807   11681 fix.go:56] duration metric: took 638.341334ms for fixHost
	I0507 11:10:01.714837   11681 main.go:141] libmachine: Using SSH client type: native
	I0507 11:10:01.714935   11681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009b9c80] 0x1009bc4e0 <nil>  [] 0s} localhost 51232 <nil> <nil>}
	I0507 11:10:01.714944   11681 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0507 11:10:01.784026   11681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715105401.549161097
	
	I0507 11:10:01.784034   11681 fix.go:216] guest clock: 1715105401.549161097
	I0507 11:10:01.784038   11681 fix.go:229] Guest: 2024-05-07 11:10:01.549161097 -0700 PDT Remote: 2024-05-07 11:10:01.714809 -0700 PDT m=+0.746332043 (delta=-165.647903ms)
	I0507 11:10:01.784055   11681 fix.go:200] guest clock delta is within tolerance: -165.647903ms
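fix.go derives the delta above by running date +%s.%N in the guest and comparing it against the host clock; the ~166 ms skew is inside the tolerance, so no resync is performed. A rough equivalent by hand (assumes GNU date on the host, which macOS's BSD date is not, and the SSH port/user from this log):

    guest=$(ssh -p 51232 docker@localhost date +%s.%N)
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$guest - $host" | bc)s"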
	I0507 11:10:01.784057   11681 start.go:83] releasing machines lock for "running-upgrade-776000", held for 707.602958ms
	I0507 11:10:01.784108   11681 ssh_runner.go:195] Run: cat /version.json
	I0507 11:10:01.784115   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:10:01.784119   11681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 11:10:01.784151   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	W0507 11:10:01.784860   11681 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51232: connect: connection refused
	I0507 11:10:01.784881   11681 retry.go:31] will retry after 196.769339ms: dial tcp [::1]:51232: connect: connection refused
	W0507 11:10:01.818279   11681 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0507 11:10:01.818328   11681 ssh_runner.go:195] Run: systemctl --version
	I0507 11:10:01.820329   11681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 11:10:01.821945   11681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 11:10:01.821977   11681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0507 11:10:01.824653   11681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0507 11:10:01.828813   11681 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
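The two find/sed pipelines above normalize every bridge- and podman-style CNI config: IPv6 dst/subnet entries are dropped and the pod subnet is rewritten to minikube's 10.244.0.0/16. The core substitution, applied to the one file it matched here:

    sudo sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist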
	I0507 11:10:01.828821   11681 start.go:494] detecting cgroup driver to use...
	I0507 11:10:01.828922   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 11:10:01.834117   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0507 11:10:01.837328   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 11:10:01.840314   11681 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 11:10:01.840336   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 11:10:01.843171   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 11:10:01.846417   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 11:10:01.849725   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 11:10:01.852662   11681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 11:10:01.855589   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 11:10:01.858765   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 11:10:01.862235   11681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
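Taken together, the sed edits above pin the sandbox image to pause:3.7, migrate the runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and set SystemdCgroup = false so containerd matches the cgroupfs driver chosen for Docker below. A quick way to spot-check the result:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml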
	I0507 11:10:01.865513   11681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 11:10:01.867991   11681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 11:10:01.870944   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:10:01.964680   11681 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 11:10:01.973175   11681 start.go:494] detecting cgroup driver to use...
	I0507 11:10:01.973250   11681 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 11:10:01.978569   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 11:10:01.983734   11681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 11:10:01.990252   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 11:10:01.995193   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 11:10:01.999842   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 11:10:02.005334   11681 ssh_runner.go:195] Run: which cri-dockerd
	I0507 11:10:02.006514   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 11:10:02.009099   11681 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 11:10:02.014215   11681 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 11:10:02.104480   11681 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 11:10:02.204294   11681 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 11:10:02.204367   11681 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
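docker.go:574 reports that the 130-byte /etc/docker/daemon.json written here carries the cgroupfs setting. The log does not echo the file's contents, but a sketch of what such a file conventionally holds (exec-opts is Docker's standard key for the cgroup driver; the other keys are illustrative):

    sudo tee /etc/docker/daemon.json <<'EOF' >/dev/null
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "storage-driver": "overlay2"
    }
    EOF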
	I0507 11:10:02.209697   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:10:02.294958   11681 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 11:10:14.915295   11681 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.620753375s)
	I0507 11:10:14.915368   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 11:10:14.920003   11681 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0507 11:10:14.928744   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 11:10:14.933570   11681 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 11:10:15.016427   11681 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 11:10:15.094700   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:10:15.172729   11681 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 11:10:15.179135   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 11:10:15.183808   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:10:15.264614   11681 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 11:10:15.302359   11681 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 11:10:15.302444   11681 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 11:10:15.304388   11681 start.go:562] Will wait 60s for crictl version
	I0507 11:10:15.304438   11681 ssh_runner.go:195] Run: which crictl
	I0507 11:10:15.305853   11681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 11:10:15.317522   11681 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0507 11:10:15.317596   11681 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 11:10:15.330029   11681 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 11:10:15.349732   11681 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0507 11:10:15.349798   11681 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
I0507 11:10:15.351202   11681 kubeadm.go:877] updating cluster {Name:running-upgrade-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-776000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0507 11:10:15.351245   11681 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0507 11:10:15.351285   11681 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 11:10:15.362273   11681 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 11:10:15.362282   11681 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0507 11:10:15.362327   11681 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 11:10:15.365479   11681 ssh_runner.go:195] Run: which lz4
	I0507 11:10:15.366767   11681 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0507 11:10:15.367950   11681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 11:10:15.367960   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0507 11:10:16.068759   11681 docker.go:649] duration metric: took 702.048833ms to copy over tarball
	I0507 11:10:16.068830   11681 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0507 11:10:17.445571   11681 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.376776959s)
	I0507 11:10:17.445585   11681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0507 11:10:17.462733   11681 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 11:10:17.466165   11681 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0507 11:10:17.471337   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:10:17.559542   11681 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 11:10:18.789690   11681 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.23017675s)
	I0507 11:10:18.789775   11681 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 11:10:18.806377   11681 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 11:10:18.806387   11681 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0507 11:10:18.806392   11681 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0507 11:10:18.814156   11681 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:10:18.814210   11681 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:10:18.814298   11681 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:10:18.814435   11681 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:10:18.814448   11681 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:10:18.814517   11681 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:10:18.814562   11681 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:10:18.814639   11681 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0507 11:10:18.822230   11681 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:10:18.823773   11681 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:10:18.823797   11681 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:10:18.823841   11681 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:10:18.823840   11681 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:10:18.824560   11681 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:10:18.824622   11681 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:10:18.824703   11681 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	W0507 11:10:19.795384   11681 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0507 11:10:19.795960   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:10:19.837355   11681 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0507 11:10:19.837440   11681 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:10:19.837532   11681 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:10:19.862312   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0507 11:10:19.862453   11681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0507 11:10:19.864500   11681 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0507 11:10:19.864523   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0507 11:10:19.884074   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0507 11:10:19.893329   11681 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0507 11:10:19.893345   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W0507 11:10:19.894903   11681 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0507 11:10:19.895033   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:10:19.901008   11681 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0507 11:10:19.901032   11681 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:10:19.901073   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0507 11:10:19.952273   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:10:20.011806   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:10:20.030044   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:10:20.063017   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:10:20.065248   11681 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0507 11:10:20.160237   11681 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0507 11:10:20.160260   11681 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:10:20.160267   11681 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0507 11:10:20.160291   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0507 11:10:20.160295   11681 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0507 11:10:20.160327   11681 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0507 11:10:20.160340   11681 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:10:20.160345   11681 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0507 11:10:20.160356   11681 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:10:20.160370   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:10:20.160376   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:10:20.160391   11681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0507 11:10:20.160388   11681 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0507 11:10:20.160410   11681 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:10:20.160415   11681 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0507 11:10:20.160316   11681 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0507 11:10:20.160424   11681 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:10:20.160434   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0507 11:10:20.160437   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:10:20.160314   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:10:20.160447   11681 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:10:20.199532   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0507 11:10:20.199538   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0507 11:10:20.199587   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0507 11:10:20.199615   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0507 11:10:20.199617   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0507 11:10:20.199627   11681 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0507 11:10:20.199653   11681 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0507 11:10:20.199664   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0507 11:10:20.199657   11681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0507 11:10:20.199677   11681 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0507 11:10:20.210358   11681 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0507 11:10:20.210394   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0507 11:10:20.210455   11681 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0507 11:10:20.210465   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0507 11:10:20.222901   11681 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0507 11:10:20.222915   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0507 11:10:20.300529   11681 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0507 11:10:20.307523   11681 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0507 11:10:20.307539   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0507 11:10:20.384552   11681 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0507 11:10:20.460590   11681 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0507 11:10:20.460610   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0507 11:10:20.634356   11681 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0507 11:10:20.634392   11681 cache_images.go:92] duration metric: took 1.828057083s to LoadCachedImages
	W0507 11:10:20.634435   11681 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
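This warning is why image loading stopped short: the kube-proxy tarball is missing from the host-side cache, so LoadCachedImages gave up after transferring only storage-provisioner, pause, coredns, and etcd. The cache directory named in the error can be inspected directly:

    ls -l /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/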
	I0507 11:10:20.634441   11681 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0507 11:10:20.634513   11681 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-776000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-776000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 11:10:20.634578   11681 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 11:10:20.648011   11681 cni.go:84] Creating CNI manager for ""
	I0507 11:10:20.648023   11681 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:10:20.648028   11681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0507 11:10:20.648036   11681 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-776000 NodeName:running-upgrade-776000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 11:10:20.648099   11681 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-776000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 11:10:20.648153   11681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0507 11:10:20.651301   11681 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 11:10:20.651326   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 11:10:20.654259   11681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0507 11:10:20.659228   11681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 11:10:20.664134   11681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
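The 2096-byte kubeadm.yaml.new written here is the config printed above; because this is the restart path, minikube later drives kubeadm phase-by-phase against it instead of running a full kubeadm init, exactly as the commands further down show:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    # ...followed by: kubelet-start, control-plane all, etcd local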
	I0507 11:10:20.669160   11681 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0507 11:10:20.670442   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:10:20.750963   11681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:10:20.755840   11681 certs.go:68] Setting up /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000 for IP: 10.0.2.15
	I0507 11:10:20.755846   11681 certs.go:194] generating shared ca certs ...
	I0507 11:10:20.755854   11681 certs.go:226] acquiring lock for ca certs: {Name:mk0fe80b930eecdc420c4c0ef01e5eae3fea7733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:10:20.756089   11681 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.key
	I0507 11:10:20.756140   11681 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.key
	I0507 11:10:20.756145   11681 certs.go:256] generating profile certs ...
	I0507 11:10:20.756210   11681 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.key
	I0507 11:10:20.756224   11681 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.key.a56ed742
	I0507 11:10:20.756232   11681 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.crt.a56ed742 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0507 11:10:20.802818   11681 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.crt.a56ed742 ...
	I0507 11:10:20.802823   11681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.crt.a56ed742: {Name:mk3bf4801665b67d68e89564bc2c2a837d37fd85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:10:20.803043   11681 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.key.a56ed742 ...
	I0507 11:10:20.803048   11681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.key.a56ed742: {Name:mkeb556038a25f58e769940a758fcf060c8b560e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:10:20.803168   11681 certs.go:381] copying /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.crt.a56ed742 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.crt
	I0507 11:10:20.803309   11681 certs.go:385] copying /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.key.a56ed742 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.key
	I0507 11:10:20.803440   11681 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/proxy-client.key
	I0507 11:10:20.803562   11681 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422.pem (1338 bytes)
	W0507 11:10:20.803589   11681 certs.go:480] ignoring /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422_empty.pem, impossibly tiny 0 bytes
	I0507 11:10:20.803594   11681 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem (1679 bytes)
	I0507 11:10:20.803612   11681 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem (1078 bytes)
	I0507 11:10:20.803634   11681 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem (1123 bytes)
	I0507 11:10:20.803651   11681 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem (1675 bytes)
	I0507 11:10:20.803688   11681 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem (1708 bytes)
	I0507 11:10:20.804013   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 11:10:20.811189   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 11:10:20.817743   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 11:10:20.824599   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0507 11:10:20.831673   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0507 11:10:20.838786   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 11:10:20.845533   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 11:10:20.852405   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 11:10:20.859598   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422.pem --> /usr/share/ca-certificates/9422.pem (1338 bytes)
	I0507 11:10:20.866629   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem --> /usr/share/ca-certificates/94222.pem (1708 bytes)
	I0507 11:10:20.873045   11681 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 11:10:20.881654   11681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 11:10:20.887371   11681 ssh_runner.go:195] Run: openssl version
	I0507 11:10:20.889243   11681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9422.pem && ln -fs /usr/share/ca-certificates/9422.pem /etc/ssl/certs/9422.pem"
	I0507 11:10:20.892172   11681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9422.pem
	I0507 11:10:20.893651   11681 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 17:57 /usr/share/ca-certificates/9422.pem
	I0507 11:10:20.893680   11681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9422.pem
	I0507 11:10:20.895515   11681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9422.pem /etc/ssl/certs/51391683.0"
	I0507 11:10:20.898916   11681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94222.pem && ln -fs /usr/share/ca-certificates/94222.pem /etc/ssl/certs/94222.pem"
	I0507 11:10:20.901888   11681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94222.pem
	I0507 11:10:20.903287   11681 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 17:57 /usr/share/ca-certificates/94222.pem
	I0507 11:10:20.903306   11681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94222.pem
	I0507 11:10:20.905121   11681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94222.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 11:10:20.907766   11681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 11:10:20.911305   11681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:10:20.912800   11681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:10:20.912820   11681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:10:20.914723   11681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
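The openssl x509 -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: verification only finds a CA in /etc/ssl/certs through a symlink named <subject-hash>.0 (b5213941.0 is that hash for minikubeCA.pem). The general recipe:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"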
	I0507 11:10:20.917721   11681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 11:10:20.919173   11681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0507 11:10:20.920957   11681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0507 11:10:20.922698   11681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0507 11:10:20.924661   11681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0507 11:10:20.926793   11681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0507 11:10:20.928808   11681 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
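Each -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero if it does, which is what would trigger regeneration. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"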
I0507 11:10:20.931446   11681 kubeadm.go:391] StartCluster: {Name:running-upgrade-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51264 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-776000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:10:20.931516   11681 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 11:10:20.942142   11681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0507 11:10:20.945809   11681 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0507 11:10:20.945815   11681 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0507 11:10:20.945818   11681 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0507 11:10:20.945846   11681 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0507 11:10:20.948310   11681 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:10:20.948347   11681 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-776000" does not appear in /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:10:20.948364   11681 kubeconfig.go:62] /Users/jenkins/minikube-integration/18804-8175/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-776000" cluster setting kubeconfig missing "running-upgrade-776000" context setting]
	I0507 11:10:20.948536   11681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0507 11:10:20.949218   11681 kapi.go:59] client config for running-upgrade-776000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101d4bd80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 11:10:20.950013   11681 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0507 11:10:20.953025   11681 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-776000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0507 11:10:20.953031   11681 kubeadm.go:1154] stopping kube-system containers ...
	I0507 11:10:20.953073   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 11:10:20.963647   11681 docker.go:483] Stopping containers: [afe496417406 aeccf801a1a2 19cb77da29da 54707531319b 226bf10c3f6e 090f2479094d d08ec0b2ca9a 9341018c98d9 d10c62a82fe9 7c957105b490 6ba6837d6418 522ed5b22943 12aaa8a6fc4e e30a9643f9b5 21b3acc0d61c]
	I0507 11:10:20.963716   11681 ssh_runner.go:195] Run: docker stop afe496417406 aeccf801a1a2 19cb77da29da 54707531319b 226bf10c3f6e 090f2479094d d08ec0b2ca9a 9341018c98d9 d10c62a82fe9 7c957105b490 6ba6837d6418 522ed5b22943 12aaa8a6fc4e e30a9643f9b5 21b3acc0d61c
	I0507 11:10:20.975491   11681 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0507 11:10:21.062150   11681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:10:21.065992   11681 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 May  7 18:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May  7 18:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May  7 18:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May  7 18:09 /etc/kubernetes/scheduler.conf
	
	I0507 11:10:21.066024   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/admin.conf
	I0507 11:10:21.069174   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:10:21.069198   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:10:21.072463   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/kubelet.conf
	I0507 11:10:21.075786   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:10:21.075808   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:10:21.078603   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/controller-manager.conf
	I0507 11:10:21.081232   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:10:21.081250   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:10:21.084100   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/scheduler.conf
	I0507 11:10:21.086701   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:10:21.086720   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 11:10:21.089229   11681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 11:10:21.093174   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:10:21.114835   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:10:21.476299   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:10:21.671197   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:10:21.697091   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:10:21.720818   11681 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:10:21.720907   11681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:10:22.223010   11681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:10:22.722934   11681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:10:23.221538   11681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:10:23.226016   11681 api_server.go:72] duration metric: took 1.505253125s to wait for apiserver process to appear ...
	I0507 11:10:23.226025   11681 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:10:23.226034   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:28.227994   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:28.228059   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:33.228299   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:33.228380   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:38.228973   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:38.229071   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:43.230078   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:43.230162   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:48.231558   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:48.231603   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:53.233386   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:53.233471   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:10:58.235555   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:10:58.235628   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:03.237972   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:03.238021   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:08.240234   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:08.240308   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:13.242677   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:13.242743   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:18.243465   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:18.243539   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:23.245930   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
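Each Checking/stopped pair above is one healthz probe that never gets an answer: nothing at 10.0.2.15:8443 responds within the client timeout, so every GET dies with "Client.Timeout exceeded while awaiting headers". A minimal sketch of a single probe, assuming a 5-second timeout to match the log's cadence:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// One probe per "Checking apiserver healthz" line; the
    		// "context deadline exceeded" errors are this timeout firing.
    		Timeout: 5 * time.Second,
    		// Assumption for the sketch only: skip verification. minikube
    		// actually trusts the cluster's own CA here.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz status:", resp.Status)
    }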
	I0507 11:11:23.246442   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:11:23.284459   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:11:23.284600   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:11:23.305113   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:11:23.305232   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:11:23.320244   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:11:23.320317   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:11:23.333125   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:11:23.333202   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:11:23.344147   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:11:23.344213   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:11:23.354898   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:11:23.354969   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:11:23.365184   11681 logs.go:276] 0 containers: []
	W0507 11:11:23.365198   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:11:23.365259   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:11:23.375702   11681 logs.go:276] 1 containers: [be5706a7b458]
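The diagnostic pass starts by enumerating containers per control-plane component; two IDs for apiserver, etcd, scheduler, and controller-manager indicate an exited attempt plus its restart, and the empty kindnet result only produces a warning rather than a failure. A sketch of the discovery step, keying off the k8s_ name prefix that cri-dockerd gives Kubernetes-managed containers:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		// One docker ps per component, mirroring the pass above.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }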
	I0507 11:11:23.375720   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:11:23.375726   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:11:23.413502   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:11:23.413516   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:11:23.438710   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:11:23.438718   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:11:23.442801   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:11:23.442807   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:11:23.456509   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:11:23.456522   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:11:23.471489   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:11:23.471499   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:11:23.482673   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:11:23.482683   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:11:23.498758   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:11:23.498771   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:11:23.510186   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:11:23.510196   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:11:23.528096   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:11:23.528109   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:11:23.604189   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:11:23.604199   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:11:23.630908   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:11:23.630918   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:11:23.647490   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:11:23.647502   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:11:23.658664   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:11:23.658676   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:11:23.671954   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:11:23.671968   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:11:23.693276   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:11:23.693287   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
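Every gathering pass runs the same bounded commands: journalctl for the kubelet and Docker units, docker logs --tail 400 per discovered container, kubectl describe nodes against the on-disk kubeconfig, and a crictl-or-docker fallback for container status. A condensed sketch of one pass using a few representative sources (the real pass covers every container found above):

    package main

    import (
    	"os"
    	"os/exec"
    )

    // gather runs one bounded collection command and labels its output.
    func gather(name, cmd string) {
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	os.Stdout.WriteString("==> " + name + " <==\n")
    	os.Stdout.Write(out)
    }

    func main() {
    	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
    	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
    	gather("kube-apiserver [eff257f8231c]", `docker logs --tail 400 eff257f8231c`)
    	gather("describe nodes", `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }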
	I0507 11:11:26.207182   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:31.209424   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:31.209893   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:11:31.249978   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:11:31.250124   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:11:31.272302   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:11:31.272420   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:11:31.287865   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:11:31.287951   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:11:31.301657   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:11:31.301736   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:11:31.314265   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:11:31.314334   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:11:31.325131   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:11:31.325190   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:11:31.335599   11681 logs.go:276] 0 containers: []
	W0507 11:11:31.335613   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:11:31.335671   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:11:31.350220   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:11:31.350236   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:11:31.350241   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:11:31.363935   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:11:31.363948   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:11:31.376210   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:11:31.376222   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:11:31.402499   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:11:31.402508   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:11:31.440292   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:11:31.440303   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:11:31.476236   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:11:31.476253   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:11:31.515585   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:11:31.515596   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:11:31.530553   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:11:31.530565   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:11:31.542673   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:11:31.542686   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:11:31.560152   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:11:31.560165   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:11:31.572357   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:11:31.572368   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:11:31.576753   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:11:31.576761   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:11:31.590808   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:11:31.590819   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:11:31.604923   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:11:31.604932   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:11:31.616247   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:11:31.616257   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:11:31.631557   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:11:31.631565   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
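From here to the end of the section the same probe-then-gather cycle repeats every few seconds without the apiserver ever answering. After the initial window of bare probes expires, each failed healthz check is followed by a full diagnostic sweep, which is why the log interleaves the two. The outer control flow reduces to something like this sketch, where checkHealthz and gatherDiagnostics stand for the steps shown above and the overall budget is an assumption:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // Stand-ins for the probe and sweep shown earlier in the log.
    func checkHealthz() error { return errors.New("context deadline exceeded") }
    func gatherDiagnostics()  { /* docker ps + docker logs + journalctl, as above */ }

    func main() {
    	// Assumed overall budget; the real deadline is not visible here.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if err := checkHealthz(); err == nil {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		// Each failed probe triggers a full diagnostic sweep before
    		// the next attempt.
    		gatherDiagnostics()
    	}
    	fmt.Println("apiserver never became healthy")
    }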
	I0507 11:11:34.148511   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:39.151084   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:39.151482   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:11:39.185437   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:11:39.185559   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:11:39.212869   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:11:39.212960   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:11:39.226465   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:11:39.226527   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:11:39.237458   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:11:39.237531   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:11:39.248119   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:11:39.248178   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:11:39.258471   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:11:39.258538   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:11:39.268730   11681 logs.go:276] 0 containers: []
	W0507 11:11:39.268738   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:11:39.268792   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:11:39.279280   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:11:39.279296   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:11:39.279302   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:11:39.317542   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:11:39.317551   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:11:39.329310   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:11:39.329322   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:11:39.340673   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:11:39.340686   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:11:39.352044   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:11:39.352057   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:11:39.363434   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:11:39.363448   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:11:39.377730   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:11:39.377741   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:11:39.414106   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:11:39.414117   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:11:39.428337   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:11:39.428346   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:11:39.443636   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:11:39.443648   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:11:39.458124   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:11:39.458134   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:11:39.475710   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:11:39.475720   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:11:39.501727   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:11:39.501740   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:11:39.513821   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:11:39.513830   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:11:39.518496   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:11:39.518505   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:11:39.542431   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:11:39.542441   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:11:42.058717   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:47.061354   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:47.061771   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:11:47.101328   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:11:47.101460   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:11:47.125542   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:11:47.125652   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:11:47.139770   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:11:47.139839   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:11:47.152364   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:11:47.152429   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:11:47.163037   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:11:47.163109   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:11:47.175005   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:11:47.175079   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:11:47.188945   11681 logs.go:276] 0 containers: []
	W0507 11:11:47.188954   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:11:47.189004   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:11:47.199581   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:11:47.199600   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:11:47.199606   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:11:47.237474   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:11:47.237485   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:11:47.261363   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:11:47.261375   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:11:47.280283   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:11:47.280295   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:11:47.292485   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:11:47.292496   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:11:47.308588   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:11:47.308598   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:11:47.320282   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:11:47.320293   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:11:47.324659   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:11:47.324666   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:11:47.342284   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:11:47.342293   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:11:47.353777   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:11:47.353787   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:11:47.365327   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:11:47.365338   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:11:47.379813   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:11:47.379824   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:11:47.405259   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:11:47.405265   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:11:47.425372   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:11:47.425384   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:11:47.436818   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:11:47.436831   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:11:47.473893   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:11:47.473903   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:11:49.990782   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:11:54.993284   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:11:54.993731   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:11:55.036126   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:11:55.036261   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:11:55.058532   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:11:55.058652   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:11:55.073935   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:11:55.074005   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:11:55.089908   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:11:55.089994   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:11:55.100518   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:11:55.100591   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:11:55.115348   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:11:55.115415   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:11:55.125817   11681 logs.go:276] 0 containers: []
	W0507 11:11:55.125830   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:11:55.125884   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:11:55.136508   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:11:55.136525   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:11:55.136531   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:11:55.140685   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:11:55.140693   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:11:55.154534   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:11:55.154548   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:11:55.171663   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:11:55.171674   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:11:55.187585   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:11:55.187597   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:11:55.205017   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:11:55.205027   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:11:55.244720   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:11:55.244730   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:11:55.258052   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:11:55.258066   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:11:55.269743   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:11:55.269755   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:11:55.305466   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:11:55.305481   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:11:55.321116   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:11:55.321128   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:11:55.332801   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:11:55.332811   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:11:55.358661   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:11:55.358671   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:11:55.372587   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:11:55.372599   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:11:55.396120   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:11:55.396134   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:11:55.413560   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:11:55.413573   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:11:57.926779   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:02.929090   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:02.929438   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:02.962615   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:02.962738   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:02.982903   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:02.982993   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:02.997264   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:02.997332   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:03.009281   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:03.009350   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:03.023676   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:03.023740   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:03.033970   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:03.034037   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:03.044053   11681 logs.go:276] 0 containers: []
	W0507 11:12:03.044065   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:03.044123   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:03.057688   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:03.057703   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:03.057708   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:03.069566   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:03.069576   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:03.084692   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:03.084705   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:03.096359   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:03.096372   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:03.107590   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:03.107600   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:03.133159   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:03.133170   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:03.146687   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:03.146697   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:03.170787   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:03.170805   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:03.208811   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:03.208819   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:03.243747   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:03.243759   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:03.263268   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:03.263278   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:03.279108   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:03.279118   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:03.296416   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:03.296426   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:03.311117   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:03.311129   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:03.322761   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:03.322771   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:03.327098   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:03.327107   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:05.842538   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:10.844706   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:10.844892   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:10.863533   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:10.863611   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:10.874309   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:10.874380   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:10.884883   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:10.884952   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:10.895612   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:10.895684   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:10.905913   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:10.905975   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:10.916250   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:10.916310   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:10.926097   11681 logs.go:276] 0 containers: []
	W0507 11:12:10.926105   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:10.926155   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:10.936624   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:10.936640   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:10.936645   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:10.952243   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:10.952253   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:10.977897   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:10.977904   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:10.989678   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:10.989689   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:11.027566   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:11.027576   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:11.062390   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:11.062403   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:11.076433   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:11.076442   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:11.087673   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:11.087684   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:11.092097   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:11.092103   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:11.103374   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:11.103385   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:11.122701   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:11.122713   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:11.133945   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:11.133955   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:11.145550   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:11.145559   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:11.173260   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:11.173269   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:11.187469   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:11.187478   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:11.201759   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:11.201770   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:13.721060   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:18.723745   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:18.724103   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:18.763544   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:18.763672   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:18.784012   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:18.784111   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:18.799501   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:18.799575   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:18.811981   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:18.812050   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:18.823047   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:18.823115   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:18.833977   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:18.834031   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:18.843798   11681 logs.go:276] 0 containers: []
	W0507 11:12:18.843810   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:18.843862   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:18.854250   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:18.854268   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:18.854275   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:18.868768   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:18.868779   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:18.886342   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:18.886352   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:18.905091   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:18.905101   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:18.929871   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:18.929879   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:18.942063   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:18.942075   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:18.946960   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:18.946969   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:18.982254   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:18.982269   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:18.996495   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:18.996505   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:19.008863   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:19.008873   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:19.033050   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:19.033062   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:19.047549   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:19.047559   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:19.085793   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:19.085812   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:19.102773   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:19.102782   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:19.118082   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:19.118098   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:19.129578   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:19.129591   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:21.645205   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:26.647718   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:26.648015   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:26.670991   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:26.671067   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:26.693438   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:26.693506   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:26.708180   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:26.708251   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:26.725864   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:26.725937   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:26.736924   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:26.736991   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:26.748426   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:26.748489   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:26.759367   11681 logs.go:276] 0 containers: []
	W0507 11:12:26.759378   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:26.759436   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:26.770266   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:26.770285   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:26.770291   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:26.784635   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:26.784646   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:26.796566   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:26.796577   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:26.811760   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:26.811772   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:26.826481   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:26.826494   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:26.851091   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:26.851100   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:26.890484   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:26.890494   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:26.915214   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:26.915226   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:26.929386   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:26.929397   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:26.941453   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:26.941468   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:26.957120   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:26.957132   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:26.977078   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:26.977088   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:27.014844   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:27.014852   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:27.019581   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:27.019590   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:27.038130   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:27.038140   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:27.050030   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:27.050040   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:29.567221   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:34.569697   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:34.569879   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:34.581332   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:34.581407   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:34.592569   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:34.592635   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:34.603273   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:34.603338   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:34.614135   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:34.614199   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:34.625164   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:34.625224   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:34.635894   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:34.635957   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:34.646503   11681 logs.go:276] 0 containers: []
	W0507 11:12:34.646519   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:34.646577   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:34.657396   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:34.657415   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:34.657420   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:34.672189   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:34.672199   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:34.687807   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:34.687819   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:34.711895   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:34.711904   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:34.723858   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:34.723869   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:34.728370   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:34.728379   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:34.740068   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:34.740080   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:34.751878   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:34.751888   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:34.769477   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:34.769487   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:34.781860   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:34.781872   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:34.796185   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:34.796198   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:34.831641   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:34.831651   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:34.847263   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:34.847272   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:34.885040   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:34.885050   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:34.914079   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:34.914089   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:34.926316   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:34.926327   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:37.442117   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:42.444262   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:42.444490   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:42.465976   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:42.466077   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:42.481547   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:42.481615   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:42.501441   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:42.501516   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:42.512089   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:42.512163   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:42.522915   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:42.522980   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:42.533481   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:42.533542   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:42.544913   11681 logs.go:276] 0 containers: []
	W0507 11:12:42.544923   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:42.544974   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:42.555001   11681 logs.go:276] 1 containers: [be5706a7b458]
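Each cycle discovers containers with the same pattern, one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component; the `k8s_` prefix works because the dockershim/cri-dockerd naming convention prefixes Kubernetes-managed containers that way. A hedged Go sketch of that enumeration step (the component list mirrors the log; the helper itself is illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose names start with
// k8s_<component>, mirroring the `docker ps -a --filter ... --format` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component set the log enumerates on every cycle.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```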
	I0507 11:12:42.555020   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:42.555026   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:42.569802   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:42.569815   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:42.587115   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:42.587125   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:42.602234   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:42.602243   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:42.622823   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:42.622833   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:42.635348   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:42.635361   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:42.659955   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:42.659963   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:42.698083   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:42.698096   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:42.702855   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:42.702862   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:42.727196   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:42.727209   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
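The "container status" probe uses a shell fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` runs crictl when it is on PATH and falls back to plain `docker ps -a` otherwise. The same fallback, sketched in Go for consistency with the examples above (illustrative only; sudo elevation is omitted):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when installed, mirroring `which crictl || echo crictl`;
	// otherwise use docker, mirroring the `|| sudo docker ps -a` branch.
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println(tool, "failed:", err)
		return
	}
	fmt.Print(string(out))
}
```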
	I0507 11:12:42.740008   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:42.740022   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:42.754099   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:42.754112   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:42.769614   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:42.769627   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:42.781269   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:42.781281   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:42.796713   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:42.796725   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:42.830981   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:42.830994   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:45.347888   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:50.350064   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:50.350237   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:50.361861   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:50.361935   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:50.374491   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:50.374564   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:50.388428   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:50.388502   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:50.399313   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:50.399385   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:50.410865   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:50.410945   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:50.421897   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:50.421988   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:50.432306   11681 logs.go:276] 0 containers: []
	W0507 11:12:50.432318   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:50.432381   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:50.442718   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:50.442735   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:50.442740   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:50.454814   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:50.454825   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:50.472658   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:50.472668   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:50.490643   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:50.490654   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:12:50.504773   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:50.504787   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:50.516944   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:50.516959   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:50.531697   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:50.531707   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:50.568554   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:50.568566   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:50.582575   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:50.582586   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:50.596506   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:50.596518   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:50.613419   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:50.613431   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:50.625182   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:50.625196   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:50.629841   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:50.629850   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:50.663993   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:50.664010   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:50.688834   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:50.688845   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:50.703149   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:50.703160   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:53.228007   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:12:58.229606   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:12:58.229826   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:12:58.247648   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:12:58.247739   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:12:58.260924   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:12:58.261010   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:12:58.271953   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:12:58.272023   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:12:58.282154   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:12:58.282225   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:12:58.292700   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:12:58.292775   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:12:58.303390   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:12:58.303465   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:12:58.313300   11681 logs.go:276] 0 containers: []
	W0507 11:12:58.313309   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:12:58.313363   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:12:58.324076   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:12:58.324092   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:12:58.324097   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:12:58.362076   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:12:58.362087   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:12:58.376832   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:12:58.376841   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:12:58.391052   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:12:58.391065   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:12:58.405398   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:12:58.405407   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:12:58.428899   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:12:58.428908   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:12:58.453586   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:12:58.453599   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:12:58.465442   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:12:58.465454   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:12:58.480743   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:12:58.480753   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:12:58.495380   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:12:58.495396   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:12:58.500212   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:12:58.500220   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:12:58.511354   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:12:58.511363   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:12:58.527228   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:12:58.527238   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:12:58.538972   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:12:58.538983   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:12:58.556874   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:12:58.556885   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:12:58.591600   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:12:58.591611   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:01.105324   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:06.107342   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:06.107529   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:06.120617   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:06.120698   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:06.131820   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:06.131893   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:06.143465   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:06.143542   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:06.155372   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:06.155451   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:06.167895   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:06.167970   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:06.179589   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:06.179670   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:06.190384   11681 logs.go:276] 0 containers: []
	W0507 11:13:06.190398   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:06.190458   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:06.201065   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:06.201082   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:06.201087   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:06.242927   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:06.242948   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:06.260308   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:06.260322   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:06.278695   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:06.278712   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:06.283330   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:06.283341   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:06.298936   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:06.298948   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:06.312044   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:06.312055   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:06.330572   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:06.330588   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:06.356963   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:06.356981   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:06.380515   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:06.380528   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:06.416962   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:06.416975   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:06.431383   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:06.431395   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:06.444486   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:06.444499   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:06.456198   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:06.456211   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:06.471141   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:06.471152   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:06.495514   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:06.495524   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:09.008292   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:14.010865   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:14.011103   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:14.030841   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:14.030932   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:14.044966   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:14.045044   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:14.056904   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:14.056976   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:14.067444   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:14.067512   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:14.078117   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:14.078181   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:14.088459   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:14.088524   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:14.098235   11681 logs.go:276] 0 containers: []
	W0507 11:13:14.098246   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:14.098300   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:14.110683   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:14.110700   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:14.110706   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:14.122710   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:14.122724   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:14.161106   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:14.161113   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:14.174680   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:14.174692   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:14.198328   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:14.198340   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:14.213800   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:14.213811   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:14.225279   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:14.225291   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:14.242607   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:14.242616   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:14.256656   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:14.256666   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:14.268403   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:14.268413   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:14.291566   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:14.291575   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:14.296314   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:14.296323   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:14.335781   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:14.335793   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:14.354303   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:14.354314   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:14.366052   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:14.366065   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:14.381022   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:14.381034   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:16.893555   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:21.896085   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:21.896442   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:21.929086   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:21.929215   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:21.949545   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:21.949630   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:21.962823   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:21.962891   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:21.974614   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:21.974677   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:21.984913   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:21.984986   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:21.995159   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:21.995225   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:22.008225   11681 logs.go:276] 0 containers: []
	W0507 11:13:22.008237   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:22.008299   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:22.018584   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:22.018602   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:22.018608   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:22.036247   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:22.036258   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:22.047892   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:22.047904   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:22.086729   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:22.086737   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:22.091406   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:22.091415   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:22.116683   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:22.116693   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:22.133265   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:22.133275   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:22.146688   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:22.146701   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:22.161580   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:22.161591   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:22.172326   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:22.172338   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:22.207429   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:22.207442   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:22.231138   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:22.231147   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:22.245724   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:22.245734   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:22.257629   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:22.257642   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:22.276025   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:22.276036   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:22.287641   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:22.287654   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:24.804356   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:29.806824   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:29.807039   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:29.822948   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:29.823017   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:29.833942   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:29.834012   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:29.846960   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:29.847031   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:29.857664   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:29.857728   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:29.870216   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:29.870284   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:29.881100   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:29.881158   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:29.892160   11681 logs.go:276] 0 containers: []
	W0507 11:13:29.892169   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:29.892226   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:29.902697   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:29.902713   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:29.902718   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:29.919968   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:29.919977   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:29.931541   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:29.931550   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:29.956412   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:29.956419   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:29.960746   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:29.960756   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:29.994769   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:29.994779   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:30.008905   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:30.008919   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:30.023920   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:30.023928   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:30.039822   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:30.039836   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:30.051722   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:30.051735   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:30.090498   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:30.090505   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:30.115746   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:30.115757   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:30.131954   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:30.131963   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:30.143745   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:30.143755   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:30.158675   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:30.158685   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:30.179750   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:30.179760   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:32.700623   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:37.711034   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:37.711399   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:37.746574   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:37.746704   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:37.766121   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:37.766207   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:37.780665   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:37.780743   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:37.792719   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:37.792784   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:37.808995   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:37.809056   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:37.819659   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:37.819727   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:37.829767   11681 logs.go:276] 0 containers: []
	W0507 11:13:37.829777   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:37.829828   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:37.840114   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:37.840130   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:37.840135   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:37.851657   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:37.851670   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:37.875786   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:37.875797   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:37.880120   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:37.880127   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:37.914675   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:37.914686   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:37.929789   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:37.929804   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:37.940822   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:37.940836   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:37.952579   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:37.952589   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:37.967459   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:37.967469   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:37.979733   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:37.979744   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:37.995429   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:37.995439   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:38.009437   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:38.009446   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:38.033493   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:38.033504   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:38.050331   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:38.050340   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:38.067508   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:38.067518   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:38.079192   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:38.079201   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:40.622633   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:45.629608   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:45.629698   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:45.641446   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:45.641518   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:45.651924   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:45.651991   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:45.664269   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:45.664341   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:45.674945   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:45.675010   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:45.686485   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:45.686561   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:45.697148   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:45.697218   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:45.708064   11681 logs.go:276] 0 containers: []
	W0507 11:13:45.708074   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:45.708131   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:45.719363   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:45.719380   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:45.719385   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:45.761110   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:45.761120   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:45.765304   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:45.765310   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:45.779097   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:45.779111   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:45.809519   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:45.809532   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:45.823717   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:45.823728   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:45.838160   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:45.838174   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:45.872283   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:45.872294   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:45.884495   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:45.884507   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:45.899423   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:45.899436   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:45.918282   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:45.918294   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:45.933466   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:45.933475   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:45.944874   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:45.944883   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:45.957497   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:45.957507   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:45.974555   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:45.974566   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:45.986172   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:45.986183   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:48.513218   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:53.518203   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:53.518331   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:53.529626   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:53.529697   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:53.540960   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:53.541031   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:53.556247   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:53.556316   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:53.566922   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:53.566987   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:53.577544   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:53.577620   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:53.588642   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:53.588711   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:53.599425   11681 logs.go:276] 0 containers: []
	W0507 11:13:53.599437   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:53.599492   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:53.611024   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:53.611043   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:53.611053   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:53.635736   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:53.635757   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:53.676336   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:53.676356   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:53.718682   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:53.718699   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:53.734442   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:53.734453   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:53.750885   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:53.750897   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:53.766081   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:53.766095   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:53.791771   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:53.791785   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:53.812758   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:53.812778   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:53.828814   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:53.828828   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:53.833888   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:53.833900   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:53.848749   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:53.848762   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:53.861078   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:53.861091   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:53.873386   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:53.873397   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:53.891521   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:53.891532   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:53.903225   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:53.903236   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:56.420120   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:01.424025   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:01.424273   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:01.448593   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:14:01.448712   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:01.465655   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:14:01.465741   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:01.478494   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:14:01.478567   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:01.489436   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:14:01.489506   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:01.499879   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:14:01.499953   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:01.510356   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:14:01.510429   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:01.522317   11681 logs.go:276] 0 containers: []
	W0507 11:14:01.522326   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:01.522385   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:01.533045   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:14:01.533062   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:14:01.533067   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:14:01.547288   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:14:01.547299   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:14:01.564973   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:14:01.564984   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:14:01.583669   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:14:01.583680   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:14:01.598746   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:14:01.598757   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:14:01.609926   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:14:01.609938   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:14:01.633937   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:14:01.633948   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:14:01.652973   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:14:01.652984   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:14:01.668174   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:14:01.668183   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:14:01.679396   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:01.679406   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:01.702080   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:01.702092   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:01.740917   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:01.740925   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:01.745510   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:14:01.745520   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:14:01.758807   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:14:01.758823   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:01.770796   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:01.770808   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:01.806678   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:14:01.806691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:14:04.325066   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:09.328231   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:09.328343   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:09.339505   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:14:09.339578   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:09.349514   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:14:09.349590   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:09.360748   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:14:09.360826   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:09.371197   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:14:09.371269   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:09.381810   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:14:09.381880   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:09.391756   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:14:09.391825   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:09.401904   11681 logs.go:276] 0 containers: []
	W0507 11:14:09.401915   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:09.401969   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:09.412122   11681 logs.go:276] 1 containers: [be5706a7b458]
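	Each discovery pass above lists containers per control-plane component by name: cri-dockerd names pod containers with a k8s_<container>_<pod>_... prefix, which is why the filter is name=k8s_kube-apiserver and so on, and why -a also surfaces exited containers (two IDs for a component means a crashed instance plus its replacement). A hedged local sketch of that step; the real code runs the same command on the guest over SSH via ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns IDs of all containers, running or exited, whose
	// cri-dockerd-assigned name starts with k8s_<component>.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}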
	I0507 11:14:09.412139   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:14:09.412146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:14:09.430049   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:14:09.430059   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:14:09.446929   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:09.446940   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:09.451378   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:14:09.451385   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:14:09.462467   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:14:09.462478   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:14:09.474058   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:14:09.474070   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:14:09.486000   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:09.486011   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:09.510147   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:14:09.510158   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:09.523971   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:09.523982   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:09.563384   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:14:09.563394   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:14:09.587370   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:14:09.587381   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:14:09.601195   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:14:09.601208   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:14:09.613290   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:14:09.613303   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:14:09.628765   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:09.628779   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:09.668442   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:14:09.668455   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:14:09.683592   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:14:09.683604   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:14:12.203595   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:17.206420   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:17.206620   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:17.218318   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:14:17.218400   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:17.232213   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:14:17.232292   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:17.243076   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:14:17.243146   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:17.253769   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:14:17.253837   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:17.264167   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:14:17.264235   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:17.275562   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:14:17.275631   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:17.285753   11681 logs.go:276] 0 containers: []
	W0507 11:14:17.285762   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:17.285815   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:17.296342   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:14:17.296359   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:17.296365   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:17.334709   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:14:17.334718   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:17.373795   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:17.373806   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:17.378769   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:14:17.378778   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:14:17.390483   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:14:17.390495   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:14:17.408648   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:14:17.408659   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:14:17.423288   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:14:17.423300   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:14:17.434792   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:17.434803   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:17.458475   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:14:17.458483   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:14:17.470537   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:14:17.470548   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:14:17.491400   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:14:17.491411   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:14:17.504359   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:17.504371   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:17.543843   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:14:17.543852   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:14:17.558623   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:14:17.558637   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:14:17.584127   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:14:17.584140   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:14:17.597832   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:14:17.597843   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:14:20.114962   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:25.117524   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:25.117614   11681 kubeadm.go:591] duration metric: took 4m4.150827667s to restartPrimaryControlPlane
	W0507 11:14:25.117693   11681 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0507 11:14:25.117725   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0507 11:14:26.124627   11681 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0068555s)
	I0507 11:14:26.124707   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 11:14:26.129633   11681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 11:14:26.132426   11681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:14:26.135103   11681 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 11:14:26.135109   11681 kubeadm.go:156] found existing configuration files:
	
	I0507 11:14:26.135134   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/admin.conf
	I0507 11:14:26.138319   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 11:14:26.138346   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:14:26.141291   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/kubelet.conf
	I0507 11:14:26.143658   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 11:14:26.143678   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:14:26.146806   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/controller-manager.conf
	I0507 11:14:26.149786   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 11:14:26.149811   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:14:26.152522   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/scheduler.conf
	I0507 11:14:26.155050   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 11:14:26.155069   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
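	The four grep/rm pairs above enforce one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint (here https://control-plane.minikube.internal:51264) is stale from the pre-reset cluster and is removed before kubeadm init runs. Since kubeadm reset already deleted all four files, every grep exits with status 2 and every rm is a no-op. A condensed Go sketch of the same loop; the function name is illustrative, not minikube's.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleConfigs mirrors the grep/rm pairs above: drop any kubeconfig
	// that is missing or does not mention the expected control-plane endpoint.
	func removeStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // config already points at the right endpoint
			}
			os.Remove(f) // best-effort, like `rm -f`
			fmt.Printf("%q may not be in %s - removed\n", endpoint, f)
		}
	}

	func main() {
		removeStaleConfigs("https://control-plane.minikube.internal:51264", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}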
	I0507 11:14:26.158085   11681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 11:14:26.175140   11681 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0507 11:14:26.175184   11681 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 11:14:26.224106   11681 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 11:14:26.224157   11681 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 11:14:26.224335   11681 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 11:14:26.274159   11681 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 11:14:26.278327   11681 out.go:204]   - Generating certificates and keys ...
	I0507 11:14:26.278360   11681 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 11:14:26.278397   11681 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 11:14:26.278437   11681 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0507 11:14:26.278501   11681 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0507 11:14:26.278540   11681 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0507 11:14:26.278575   11681 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0507 11:14:26.278606   11681 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0507 11:14:26.278657   11681 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0507 11:14:26.278696   11681 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0507 11:14:26.278730   11681 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0507 11:14:26.278747   11681 kubeadm.go:309] [certs] Using the existing "sa" key
	I0507 11:14:26.278770   11681 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 11:14:26.432583   11681 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 11:14:26.542704   11681 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 11:14:26.636874   11681 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 11:14:26.714705   11681 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 11:14:26.743434   11681 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 11:14:26.744236   11681 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 11:14:26.744258   11681 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 11:14:26.827669   11681 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 11:14:26.831731   11681 out.go:204]   - Booting up control plane ...
	I0507 11:14:26.831780   11681 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 11:14:26.831843   11681 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 11:14:26.831900   11681 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 11:14:26.831965   11681 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 11:14:26.832153   11681 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0507 11:14:30.832263   11681 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001819 seconds
	I0507 11:14:30.832321   11681 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 11:14:30.836115   11681 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 11:14:31.344574   11681 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 11:14:31.344684   11681 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-776000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 11:14:31.848095   11681 kubeadm.go:309] [bootstrap-token] Using token: bxylxf.1yjazcthzjr0b14w
	I0507 11:14:31.853843   11681 out.go:204]   - Configuring RBAC rules ...
	I0507 11:14:31.853906   11681 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 11:14:31.853955   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 11:14:31.855869   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 11:14:31.860500   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 11:14:31.861672   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 11:14:31.862432   11681 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 11:14:31.865462   11681 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 11:14:32.052551   11681 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 11:14:32.252414   11681 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 11:14:32.252919   11681 kubeadm.go:309] 
	I0507 11:14:32.252951   11681 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 11:14:32.252956   11681 kubeadm.go:309] 
	I0507 11:14:32.252989   11681 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 11:14:32.252992   11681 kubeadm.go:309] 
	I0507 11:14:32.253003   11681 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 11:14:32.253060   11681 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 11:14:32.253090   11681 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 11:14:32.253093   11681 kubeadm.go:309] 
	I0507 11:14:32.253128   11681 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 11:14:32.253134   11681 kubeadm.go:309] 
	I0507 11:14:32.253170   11681 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 11:14:32.253191   11681 kubeadm.go:309] 
	I0507 11:14:32.253218   11681 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 11:14:32.253253   11681 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 11:14:32.253362   11681 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 11:14:32.253378   11681 kubeadm.go:309] 
	I0507 11:14:32.253455   11681 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 11:14:32.253498   11681 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 11:14:32.253501   11681 kubeadm.go:309] 
	I0507 11:14:32.253545   11681 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bxylxf.1yjazcthzjr0b14w \
	I0507 11:14:32.253596   11681 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc \
	I0507 11:14:32.253609   11681 kubeadm.go:309] 	--control-plane 
	I0507 11:14:32.253611   11681 kubeadm.go:309] 
	I0507 11:14:32.253686   11681 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 11:14:32.253693   11681 kubeadm.go:309] 
	I0507 11:14:32.253734   11681 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bxylxf.1yjazcthzjr0b14w \
	I0507 11:14:32.253811   11681 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc 
	I0507 11:14:32.253903   11681 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 11:14:32.253913   11681 cni.go:84] Creating CNI manager for ""
	I0507 11:14:32.253923   11681 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:14:32.258222   11681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0507 11:14:32.265219   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0507 11:14:32.268219   11681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
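	The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not printed in the log. For reference, a bridge CNI chain of the kind this step produces looks roughly like the sketch below; the bridge name, subnet, and plugin options are assumptions, not the actual file contents.

	package main

	import "os"

	// A plausible bridge CNI chain (bridge + portmap) of the sort written to
	// /etc/cni/net.d/1-k8s.conflist; values are illustrative only.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Same destination path as the scp above; needs root on the node.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}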
	I0507 11:14:32.273694   11681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 11:14:32.273777   11681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-776000 minikube.k8s.io/updated_at=2024_05_07T11_14_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=running-upgrade-776000 minikube.k8s.io/primary=true
	I0507 11:14:32.273778   11681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 11:14:32.277202   11681 ops.go:34] apiserver oom_adj: -16
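	The oom_adj line confirms that kube-apiserver runs with an OOM-killer adjustment of -16, so under memory pressure the kernel strongly prefers killing other processes first. A tiny sketch of the same check, simplified to an exact pgrep name match rather than the full -xnf pattern used above.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Newest process whose name is exactly kube-apiserver.
		pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process:", err)
			return
		}
		adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in the log
	}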
	I0507 11:14:32.307154   11681 kubeadm.go:1107] duration metric: took 33.413375ms to wait for elevateKubeSystemPrivileges
	W0507 11:14:32.325456   11681 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 11:14:32.325466   11681 kubeadm.go:393] duration metric: took 4m11.372901417s to StartCluster
	I0507 11:14:32.325476   11681 settings.go:142] acquiring lock: {Name:mk50bfcfedcd3b99aacdbeb1994dffd265fa3e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:14:32.325652   11681 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:14:32.326040   11681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:14:32.326261   11681 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:14:32.330199   11681 out.go:177] * Verifying Kubernetes components...
	I0507 11:14:32.326348   11681 config.go:182] Loaded profile config "running-upgrade-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:14:32.326332   11681 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 11:14:32.338203   11681 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-776000"
	I0507 11:14:32.338218   11681 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-776000"
	W0507 11:14:32.338221   11681 addons.go:243] addon storage-provisioner should already be in state true
	I0507 11:14:32.338234   11681 host.go:66] Checking if "running-upgrade-776000" exists ...
	I0507 11:14:32.338264   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:14:32.338281   11681 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-776000"
	I0507 11:14:32.338292   11681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-776000"
	I0507 11:14:32.339271   11681 kapi.go:59] client config for running-upgrade-776000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101d4bd80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 11:14:32.339391   11681 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-776000"
	W0507 11:14:32.339396   11681 addons.go:243] addon default-storageclass should already be in state true
	I0507 11:14:32.339404   11681 host.go:66] Checking if "running-upgrade-776000" exists ...
	I0507 11:14:32.343119   11681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:14:32.346188   11681 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:14:32.346195   11681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 11:14:32.346200   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:14:32.346719   11681 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 11:14:32.346725   11681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 11:14:32.346729   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:14:32.430304   11681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:14:32.435434   11681 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:14:32.435479   11681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:14:32.439158   11681 api_server.go:72] duration metric: took 112.885417ms to wait for apiserver process to appear ...
	I0507 11:14:32.439167   11681 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:14:32.439174   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:32.445094   11681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 11:14:32.515859   11681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:14:37.441286   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:37.441351   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:42.441695   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:42.441723   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:47.441984   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:47.442024   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:52.442861   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:52.442885   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:57.443449   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:57.443476   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:02.444248   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:02.444299   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0507 11:15:02.788430   11681 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0507 11:15:02.792759   11681 out.go:177] * Enabled addons: storage-provisioner
	I0507 11:15:02.805772   11681 addons.go:505] duration metric: took 30.479836417s for enable addons: enabled=[storage-provisioner]
	I0507 11:15:07.445290   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:07.445329   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:12.446565   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:12.446615   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:17.448223   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:17.448255   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:22.449976   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:22.450019   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:27.450353   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:27.450399   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:32.452569   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:32.452729   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:32.463290   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:32.463362   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:32.475133   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:32.475209   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:32.485618   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:32.485693   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:32.495995   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:32.496069   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:32.506802   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:32.506873   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:32.517450   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:32.517520   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:32.527511   11681 logs.go:276] 0 containers: []
	W0507 11:15:32.527524   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:32.527582   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:32.537965   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:32.537980   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:32.537986   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:32.562156   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:32.562168   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:32.579744   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:32.579758   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:32.591200   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:32.591211   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:32.596281   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:32.596289   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:32.633295   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:32.633307   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:32.649191   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:32.649201   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:32.662917   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:32.662928   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:32.674653   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:32.674663   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:32.686371   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:32.686385   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:32.702296   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:32.702309   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:32.737776   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:32.737787   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:32.755726   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:32.755739   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:35.269694   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:40.270446   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:40.270583   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:40.284336   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:40.284412   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:40.294850   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:40.294921   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:40.305234   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:40.305299   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:40.315529   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:40.315600   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:40.326866   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:40.326934   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:40.336974   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:40.337046   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:40.347565   11681 logs.go:276] 0 containers: []
	W0507 11:15:40.347580   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:40.347636   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:40.358722   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:40.358736   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:40.358741   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:40.376280   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:40.376291   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:40.401803   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:40.401812   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:40.413388   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:40.413400   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:40.424938   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:40.424951   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:40.443540   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:40.443553   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:40.455680   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:40.455691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:40.469858   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:40.469867   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:40.483995   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:40.484009   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:40.495680   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:40.495691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:40.507987   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:40.507998   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:40.542924   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:40.542935   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:40.548060   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:40.548068   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:43.087436   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:48.089929   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:48.090267   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:48.122230   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:48.122346   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:48.141657   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:48.141740   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:48.154811   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:48.154886   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:48.166600   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:48.166664   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:48.177344   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:48.177424   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:48.187673   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:48.187739   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:48.198187   11681 logs.go:276] 0 containers: []
	W0507 11:15:48.198199   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:48.198262   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:48.208713   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:48.208728   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:48.208733   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:48.220239   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:48.220249   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:48.242931   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:48.242942   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:48.278317   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:48.278325   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:48.313738   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:48.313753   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:48.327749   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:48.327761   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:48.341715   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:48.341726   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:48.353454   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:48.353464   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:48.364777   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:48.364789   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:48.376223   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:48.376233   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:48.380997   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:48.381006   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:48.395771   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:48.395781   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:48.420811   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:48.420822   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:50.934538   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:55.936697   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:55.936846   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:55.950215   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:55.950294   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:55.962247   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:55.962319   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:55.972497   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:55.972556   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:55.983038   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:55.983108   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:55.993900   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:55.993970   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:56.004298   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:56.004355   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:56.014648   11681 logs.go:276] 0 containers: []
	W0507 11:15:56.014658   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:56.014710   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:56.025400   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:56.025416   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:56.025421   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:56.030160   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:56.030167   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:56.041404   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:56.041418   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:56.053644   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:56.053656   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:56.067767   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:56.067778   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:56.081832   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:56.081844   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:56.096016   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:56.096028   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:56.107414   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:56.107426   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:56.125398   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:56.125412   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:56.160474   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:56.160486   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:56.194574   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:56.194587   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:56.212177   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:56.212190   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:56.223482   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:56.223495   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:58.749358   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:03.749533   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:03.749729   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:03.766300   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:03.766386   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:03.779374   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:03.779448   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:03.790785   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:03.790855   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:03.802155   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:03.802224   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:03.813012   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:03.813093   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:03.823937   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:03.824004   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:03.833722   11681 logs.go:276] 0 containers: []
	W0507 11:16:03.833738   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:03.833793   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:03.844818   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:03.844833   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:03.844838   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:03.860129   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:03.860147   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:03.872313   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:03.872324   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:03.889872   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:03.889884   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:03.901591   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:03.901602   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:03.913666   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:03.913677   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:03.952116   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:03.952124   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:03.966407   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:03.966419   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:03.980130   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:03.980146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:03.992084   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:03.992096   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:04.015215   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:04.015225   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:04.019514   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:04.019521   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:04.053188   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:04.053201   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
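[editor's sketch] The cycle above repeats throughout this run: a GET against https://10.0.2.15:8443/healthz fails after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", and the runner then falls back to gathering diagnostics. The following is a minimal, hypothetical Go sketch of that kind of bounded healthz poll; it is illustrative only and not minikube's actual api_server.go implementation, and the retry count and sleep interval are assumptions.

// Poll an apiserver healthz endpoint with a hard 5s client timeout,
// mirroring the check/"stopped:" cadence visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped:" lines
		Transport: &http.Transport{
			// The in-VM apiserver serves a self-signed cert; a real client
			// would trust the cluster CA rather than skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
			time.Sleep(3 * time.Second)  // assumed back-off before retrying
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
	fmt.Println("gave up waiting for apiserver healthz")
}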
	I0507 11:16:06.567326   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:11.569806   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:11.570071   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:11.595859   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:11.595977   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:11.614226   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:11.614298   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:11.627913   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:11.627984   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:11.640515   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:11.640584   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:11.655631   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:11.655706   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:11.666354   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:11.666428   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:11.676820   11681 logs.go:276] 0 containers: []
	W0507 11:16:11.676832   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:11.676889   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:11.686946   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:11.686960   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:11.686967   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:11.720793   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:11.720804   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:11.725910   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:11.725919   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:11.743835   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:11.743845   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:11.762824   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:11.762836   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:11.777006   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:11.777019   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:11.789765   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:11.789776   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:11.801026   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:11.801037   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:11.835195   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:11.835208   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:11.849335   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:11.849347   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:11.861723   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:11.861734   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:11.873386   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:11.873396   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:11.891254   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:11.891264   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:14.417355   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:19.419580   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:19.419770   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:19.442603   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:19.442691   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:19.458053   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:19.458121   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:19.470282   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:19.470342   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:19.481933   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:19.482003   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:19.492531   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:19.492600   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:19.503051   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:19.503112   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:19.516104   11681 logs.go:276] 0 containers: []
	W0507 11:16:19.516114   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:19.516172   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:19.526129   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:19.526145   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:19.526151   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:19.559946   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:19.559962   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:19.574371   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:19.574383   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:19.585731   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:19.585743   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:19.597949   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:19.597959   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:19.609152   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:19.609163   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:19.621143   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:19.621154   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:19.645572   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:19.645580   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:19.680688   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:19.680698   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:19.685496   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:19.685504   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:19.699108   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:19.699118   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:19.710791   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:19.710804   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:19.725386   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:19.725398   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:22.245456   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:27.247650   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:27.247775   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:27.260083   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:27.260157   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:27.271563   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:27.271633   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:27.286019   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:27.286094   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:27.296520   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:27.296593   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:27.308112   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:27.308178   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:27.318550   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:27.318624   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:27.328451   11681 logs.go:276] 0 containers: []
	W0507 11:16:27.328461   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:27.328522   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:27.339117   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:27.339133   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:27.339139   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:27.373418   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:27.373429   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:27.387444   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:27.387454   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:27.399442   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:27.399454   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:27.411089   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:27.411102   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:27.434009   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:27.434016   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:27.445224   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:27.445234   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:27.479179   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:27.479189   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:27.483790   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:27.483799   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:27.502574   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:27.502587   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:27.514097   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:27.514109   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:27.530901   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:27.530913   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:27.548754   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:27.548771   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:30.072780   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:35.074856   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:35.074968   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:35.088573   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:35.088641   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:35.099273   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:35.099343   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:35.109466   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:35.109537   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:35.122377   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:35.122445   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:35.132796   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:35.132860   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:35.142956   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:35.143022   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:35.153313   11681 logs.go:276] 0 containers: []
	W0507 11:16:35.153327   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:35.153382   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:35.163300   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:35.163313   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:35.163319   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:35.198788   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:35.198802   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:35.213211   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:35.213224   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:35.227068   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:35.227081   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:35.239172   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:35.239181   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:35.251511   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:35.251522   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:35.262785   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:35.262798   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:35.286255   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:35.286263   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:35.318866   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:35.318874   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:35.322999   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:35.323006   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:35.334569   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:35.334579   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:35.349293   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:35.349302   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:35.366736   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:35.366749   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:37.880271   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:42.882574   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:42.882799   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:42.904980   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:42.905084   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:42.919461   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:42.919538   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:42.930381   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:42.930448   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:42.940305   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:42.940373   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:42.951010   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:42.951084   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:42.961308   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:42.961377   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:42.971247   11681 logs.go:276] 0 containers: []
	W0507 11:16:42.971257   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:42.971313   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:42.981754   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:42.981769   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:42.981774   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:42.986217   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:42.986242   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:43.023730   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:43.023743   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:43.038828   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:43.038838   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:43.050171   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:43.050181   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:43.062158   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:43.062172   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:43.076779   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:43.076789   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:43.095456   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:43.095468   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:43.130577   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:43.130587   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:43.142163   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:43.142176   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:43.167332   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:43.167339   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:43.178648   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:43.178661   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:43.196061   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:43.196071   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:45.713870   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:50.716111   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:50.716330   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:50.742638   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:50.742759   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:50.760249   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:50.760334   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:50.773520   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:16:50.773593   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:50.785229   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:50.785291   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:50.795385   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:50.795451   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:50.805963   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:50.806040   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:50.815897   11681 logs.go:276] 0 containers: []
	W0507 11:16:50.815908   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:50.815964   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:50.826530   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:50.826546   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:50.826551   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:50.838056   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:50.838069   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:50.871981   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:50.871997   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:50.876773   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:50.876780   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:50.889264   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:50.889276   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:50.905132   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:50.905144   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:50.930452   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:16:50.930464   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:16:50.947388   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:16:50.947400   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:16:50.959607   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:50.959617   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:50.996093   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:50.996107   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:51.011162   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:51.011173   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:51.023064   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:51.023074   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:51.040404   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:51.040414   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:51.052265   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:51.052275   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:51.066874   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:51.066891   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
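[editor's sketch] Each failed health check is followed by the same diagnostic sweep: for every control-plane component, list matching containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail the last 400 log lines of each hit (note the coredns count growing from 2 to 4 containers above as restarts accumulate). Below is a hypothetical Go sketch of that sweep using os/exec; the component list and --tail depth come from the log, but the exec-based approach is an assumption, not minikube's ssh_runner plumbing.

// Enumerate per-component containers and tail their logs, as the
// "Gathering logs for ..." cycle above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "list failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Mirrors the warning emitted for kindnet in this run.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, matching the log's docker invocation.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}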
	I0507 11:16:53.582125   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:58.584385   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:58.584589   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:58.613329   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:58.613447   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:58.630453   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:58.630547   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:58.644183   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:16:58.644258   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:58.654925   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:58.654994   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:58.665601   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:58.665659   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:58.675830   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:58.675896   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:58.688713   11681 logs.go:276] 0 containers: []
	W0507 11:16:58.688723   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:58.688783   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:58.699195   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:58.699212   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:58.699217   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:58.733519   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:58.733526   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:58.757252   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:58.757258   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:58.768487   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:58.768500   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:58.780256   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:58.780266   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:58.798114   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:58.798123   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:58.810448   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:58.810461   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:58.814867   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:58.814874   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:58.829164   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:16:58.829173   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:16:58.841115   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:58.841126   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:58.852669   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:58.852678   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:58.888072   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:58.888082   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:58.902640   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:16:58.902651   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:16:58.914192   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:58.914205   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:58.929480   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:58.929489   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:01.441690   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:06.442976   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:06.443138   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:06.458107   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:06.458175   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:06.469955   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:06.470025   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:06.480937   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:06.481002   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:06.491531   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:06.491595   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:06.502331   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:06.502398   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:06.519138   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:06.519205   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:06.533647   11681 logs.go:276] 0 containers: []
	W0507 11:17:06.533658   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:06.533711   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:06.544251   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:06.544269   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:06.544275   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:06.556445   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:06.556455   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:06.590257   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:06.590268   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:06.602366   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:06.602377   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:06.617667   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:06.617676   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:06.642797   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:06.642806   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:06.654558   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:06.654570   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:06.672188   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:06.672198   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:06.676626   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:06.676632   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:06.690232   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:06.690244   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:06.701500   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:06.701513   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:06.713000   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:06.713013   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:06.725251   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:06.725261   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:06.737089   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:06.737098   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:06.773743   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:06.773756   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:09.289590   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:14.291789   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:14.291903   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:14.303572   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:14.303635   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:14.314008   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:14.314068   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:14.324644   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:14.324717   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:14.335845   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:14.335921   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:14.346390   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:14.346459   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:14.359622   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:14.359684   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:14.369701   11681 logs.go:276] 0 containers: []
	W0507 11:17:14.369711   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:14.369764   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:14.380047   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:14.380063   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:14.380068   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:14.391511   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:14.391521   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:14.406647   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:14.406661   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:14.428799   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:14.428809   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:14.454197   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:14.454207   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:14.469244   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:14.469255   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:14.480680   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:14.480691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:14.492858   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:14.492870   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:14.504918   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:14.504928   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:14.539670   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:14.539679   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:14.544029   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:14.544037   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:14.585127   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:14.585137   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:14.597423   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:14.597435   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:14.608740   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:14.608749   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:14.622357   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:14.622367   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:17.141058   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:22.143250   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:22.143368   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:22.156509   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:22.156582   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:22.166762   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:22.166826   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:22.177675   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:22.177751   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:22.188402   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:22.188467   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:22.198605   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:22.198673   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:22.209043   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:22.209106   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:22.219330   11681 logs.go:276] 0 containers: []
	W0507 11:17:22.219343   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:22.219402   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:22.230360   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:22.230377   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:22.230383   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:22.234856   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:22.234864   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:22.249860   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:22.249873   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:22.284620   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:22.284630   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:22.298696   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:22.298706   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:22.310410   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:22.310421   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:22.326694   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:22.326704   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:22.338714   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:22.338724   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:22.363130   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:22.363136   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:22.401944   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:22.401954   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:22.419738   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:22.419748   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:22.431295   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:22.431306   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:22.443062   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:22.443072   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:22.454378   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:22.454388   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:22.465832   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:22.465842   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:24.982062   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:29.984160   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:29.984263   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:29.999699   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:29.999776   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:30.009858   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:30.009931   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:30.020619   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:30.020692   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:30.030765   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:30.030824   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:30.041589   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:30.041660   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:30.052280   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:30.052344   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:30.062708   11681 logs.go:276] 0 containers: []
	W0507 11:17:30.062720   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:30.062780   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:30.079261   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:30.079279   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:30.079284   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:30.092820   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:30.092830   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:30.104626   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:30.104639   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:30.116238   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:30.116250   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:30.127620   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:30.127630   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:30.157524   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:30.157535   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:30.192308   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:30.192324   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:30.207449   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:30.207460   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:30.223067   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:30.223078   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:30.235695   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:30.235704   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:30.250627   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:30.250638   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:30.254925   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:30.254932   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:30.266808   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:30.266818   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:30.285424   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:30.285434   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:30.321268   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:30.321278   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:32.835498   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:37.837992   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:37.838188   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:37.857909   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:37.858005   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:37.871684   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:37.871755   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:37.884115   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:37.884186   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:37.894565   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:37.894634   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:37.904616   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:37.904689   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:37.920606   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:37.920677   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:37.930915   11681 logs.go:276] 0 containers: []
	W0507 11:17:37.930930   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:37.930993   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:37.941321   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:37.941338   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:37.941343   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:37.955416   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:37.955427   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:37.973349   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:37.973359   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:37.985077   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:37.985088   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:38.003220   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:38.003229   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:38.026649   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:38.026660   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:38.030687   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:38.030696   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:38.042062   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:38.042072   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:38.054246   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:38.054259   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:38.066421   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:38.066432   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:38.078906   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:38.078917   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:38.098346   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:38.098358   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:38.110141   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:38.110153   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:38.143253   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:38.143264   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:38.176898   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:38.176908   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:40.690175   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:45.692327   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:45.692456   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:45.703540   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:45.703612   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:45.713845   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:45.713926   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:45.724575   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:45.724647   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:45.735444   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:45.735511   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:45.745892   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:45.745961   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:45.756525   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:45.756588   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:45.769750   11681 logs.go:276] 0 containers: []
	W0507 11:17:45.769760   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:45.769815   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:45.785556   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:45.785572   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:45.785579   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:45.797441   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:45.797453   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:45.814984   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:45.814995   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:45.827205   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:45.827218   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:45.841055   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:45.841067   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:45.852951   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:45.852967   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:45.868331   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:45.868342   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:45.903230   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:45.903239   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:45.922673   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:45.922685   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:45.935018   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:45.935031   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:45.958715   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:45.958724   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:45.993797   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:45.993810   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:46.007959   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:46.009579   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:46.021386   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:46.021399   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:46.025761   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:46.025769   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:48.539012   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:53.541103   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:53.541213   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:53.553388   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:53.553463   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:53.565627   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:53.565706   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:53.577176   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:53.577255   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:53.588406   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:53.588503   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:53.600157   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:53.600233   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:53.618567   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:53.618641   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:53.629658   11681 logs.go:276] 0 containers: []
	W0507 11:17:53.629669   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:53.629731   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:53.641244   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:53.641261   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:53.641266   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:53.678245   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:53.678262   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:53.714285   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:53.714296   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:53.726489   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:53.726500   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:53.744822   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:53.744841   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:53.760727   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:53.760739   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:53.776536   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:53.776554   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:53.792002   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:53.792014   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:53.804565   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:53.804582   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:53.817068   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:53.817084   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:53.841929   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:53.841939   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:53.853724   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:53.853736   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:53.858447   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:53.858453   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:53.874861   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:53.874872   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:53.893716   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:53.893727   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:56.408333   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:01.410374   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:01.410460   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:01.422291   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:01.422357   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:01.432766   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:01.432829   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:01.443252   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:01.443332   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:01.453830   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:01.453910   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:01.465570   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:01.465638   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:01.475851   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:01.475920   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:01.485825   11681 logs.go:276] 0 containers: []
	W0507 11:18:01.485840   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:01.485899   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:01.501193   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:01.501219   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:01.501224   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:01.513241   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:01.513255   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:01.525194   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:01.525203   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:01.546662   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:01.546672   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:01.570153   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:01.570161   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:01.584331   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:01.584341   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:01.619087   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:01.619097   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:01.630865   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:01.630876   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:01.643139   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:01.643149   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:01.654628   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:01.654641   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:01.666807   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:01.666819   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:01.700136   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:01.700145   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:01.712076   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:01.712087   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:01.726703   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:01.726713   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:01.741624   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:01.741634   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:04.247930   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:09.250152   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:09.250304   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:09.261491   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:09.261570   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:09.272268   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:09.272334   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:09.282664   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:09.282737   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:09.300299   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:09.300365   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:09.315795   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:09.315870   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:09.326528   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:09.326591   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:09.337888   11681 logs.go:276] 0 containers: []
	W0507 11:18:09.337899   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:09.337956   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:09.348636   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:09.348651   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:09.348656   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:09.367961   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:09.367974   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:09.382761   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:09.382773   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:09.395292   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:09.395302   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:09.422188   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:09.422209   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:09.469802   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:09.469817   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:09.483836   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:09.483846   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:09.495472   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:09.495482   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:09.499749   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:09.499755   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:09.522952   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:09.522963   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:09.534483   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:09.534496   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:09.557664   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:09.557671   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:09.569255   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:09.569268   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:09.603687   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:09.603694   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:09.614946   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:09.614957   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:12.128330   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:17.130384   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:17.130514   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:17.142683   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:17.142767   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:17.153413   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:17.153479   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:17.168228   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:17.168296   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:17.179226   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:17.179303   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:17.188911   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:17.188976   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:17.199680   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:17.199739   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:17.210512   11681 logs.go:276] 0 containers: []
	W0507 11:18:17.210524   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:17.210581   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:17.221213   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:17.221239   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:17.221246   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:17.236329   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:17.236343   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:17.255893   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:17.255904   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:17.267018   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:17.267032   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:17.290023   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:17.290031   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:17.294728   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:17.294735   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:17.308964   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:17.308976   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:17.320354   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:17.320366   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:17.354841   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:17.354852   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:17.369740   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:17.369751   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:17.381953   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:17.381964   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:17.394114   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:17.394125   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:17.428417   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:17.428436   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:17.440673   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:17.440683   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:17.455137   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:17.455146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:19.969081   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:24.971281   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:24.971454   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:24.983011   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:24.983087   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:24.994471   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:24.994544   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:25.004977   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:25.005045   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:25.015543   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:25.015613   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:25.025879   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:25.025950   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:25.036493   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:25.036558   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:25.047120   11681 logs.go:276] 0 containers: []
	W0507 11:18:25.047131   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:25.047188   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:25.058555   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:25.058572   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:25.058577   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:25.098641   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:25.098652   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:25.112641   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:25.112651   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:25.126475   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:25.126485   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:25.137762   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:25.137772   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:25.155559   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:25.155569   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:25.167185   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:25.167196   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:25.178552   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:25.178561   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:25.202943   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:25.202950   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:25.217270   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:25.217283   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:25.228550   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:25.228564   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:25.262196   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:25.262208   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:25.266902   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:25.266908   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:25.278508   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:25.278522   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:25.290132   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:25.290146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:27.808619   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:32.810717   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:32.815147   11681 out.go:177] 
	W0507 11:18:32.819175   11681 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0507 11:18:32.819182   11681 out.go:239] * 
	W0507 11:18:32.819616   11681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:18:32.831118   11681 out.go:177] 

** /stderr **
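The stderr trace above is minikube's wait loop for an apiserver that never becomes healthy: it probes the healthz endpoint every few seconds with a 5-second client timeout, and between attempts re-enumerates the control-plane containers and tails their logs, until the 6m0s node-wait deadline expires. A hand-run approximation of one diagnostic pass inside the guest (curl stands in for the Go HTTP client api_server.go actually uses; the container ID is the one from this run):

	# Probe the apiserver health endpoint, as api_server.go does
	curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# Find a component's container, then tail its logs (logs.go)
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	docker logs --tail 400 3c7abe4bc8ad

	# Cluster-wide views gathered in the same pass
	sudo journalctl -u kubelet -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig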
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-776000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-07 11:18:32.930961 -0700 PDT m=+1293.173728585
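The failing sequence (also visible in the Audit table below) is a two-step in-place upgrade of a single profile; a sketch of reproducing it by hand, where minikube-v1.26.0 is a stand-in for wherever the old release binary lives:

	# Step 1: create the profile with the old v1.26.0 release
	minikube-v1.26.0 start -p running-upgrade-776000 --memory=2200 --vm-driver=qemu2

	# Step 2: restart the same profile with the binary under test
	# (this is the step that exited with status 80 above)
	out/minikube-darwin-arm64 start -p running-upgrade-776000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2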
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-776000 -n running-upgrade-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-776000 -n running-upgrade-776000: exit status 2 (15.665943042s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
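The "may be ok" reflects that minikube status encodes component health into its exit code (per the command's help text), so a host reported as Running can still yield a non-zero exit when the cluster behind it is unhealthy. Checked by hand against the same profile:

	# Non-zero exit with a Running host points at cluster-level breakage
	out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-776000 || echo "status exit: $?"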
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-776000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-303000          | force-systemd-flag-303000 | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-484000              | force-systemd-env-484000  | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-484000           | force-systemd-env-484000  | jenkins | v1.33.0 | 07 May 24 11:08 PDT | 07 May 24 11:08 PDT |
	| start   | -p docker-flags-297000                | docker-flags-297000       | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-303000             | force-systemd-flag-303000 | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-303000          | force-systemd-flag-303000 | jenkins | v1.33.0 | 07 May 24 11:08 PDT | 07 May 24 11:08 PDT |
	| start   | -p cert-expiration-673000             | cert-expiration-673000    | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-297000 ssh               | docker-flags-297000       | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-297000 ssh               | docker-flags-297000       | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-297000                | docker-flags-297000       | jenkins | v1.33.0 | 07 May 24 11:08 PDT | 07 May 24 11:08 PDT |
	| start   | -p cert-options-048000                | cert-options-048000       | jenkins | v1.33.0 | 07 May 24 11:08 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-048000 ssh               | cert-options-048000       | jenkins | v1.33.0 | 07 May 24 11:09 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-048000 -- sudo        | cert-options-048000       | jenkins | v1.33.0 | 07 May 24 11:09 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-048000                | cert-options-048000       | jenkins | v1.33.0 | 07 May 24 11:09 PDT | 07 May 24 11:09 PDT |
	| start   | -p running-upgrade-776000             | minikube                  | jenkins | v1.26.0 | 07 May 24 11:09 PDT | 07 May 24 11:10 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-776000             | running-upgrade-776000    | jenkins | v1.33.0 | 07 May 24 11:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-673000             | cert-expiration-673000    | jenkins | v1.33.0 | 07 May 24 11:12 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-673000             | cert-expiration-673000    | jenkins | v1.33.0 | 07 May 24 11:12 PDT | 07 May 24 11:12 PDT |
	| start   | -p kubernetes-upgrade-133000          | kubernetes-upgrade-133000 | jenkins | v1.33.0 | 07 May 24 11:12 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-133000          | kubernetes-upgrade-133000 | jenkins | v1.33.0 | 07 May 24 11:12 PDT | 07 May 24 11:12 PDT |
	| start   | -p kubernetes-upgrade-133000          | kubernetes-upgrade-133000 | jenkins | v1.33.0 | 07 May 24 11:12 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-133000          | kubernetes-upgrade-133000 | jenkins | v1.33.0 | 07 May 24 11:12 PDT | 07 May 24 11:12 PDT |
	| start   | -p stopped-upgrade-069000             | minikube                  | jenkins | v1.26.0 | 07 May 24 11:12 PDT | 07 May 24 11:13 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-069000 stop           | minikube                  | jenkins | v1.26.0 | 07 May 24 11:13 PDT | 07 May 24 11:13 PDT |
	| start   | -p stopped-upgrade-069000             | stopped-upgrade-069000    | jenkins | v1.33.0 | 07 May 24 11:13 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
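	The Audit table records every minikube invocation in this suite run; rows with an empty End Time are commands that had not completed successfully when the table was captured, which is why the failing start entries show no end timestamp. The same trail appears to be kept on disk as JSON, assuming the default layout under the MINIKUBE_HOME shown later in this log:

	# Raw audit entries backing the table above (one JSON object per invocation)
	cat /Users/jenkins/minikube-integration/18804-8175/.minikube/logs/audit.json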
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 11:13:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 11:13:19.642023   11892 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:13:19.642168   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:13:19.642172   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:13:19.642174   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:13:19.642308   11892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:13:19.643392   11892 out.go:298] Setting JSON to false
	I0507 11:13:19.660993   11892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6170,"bootTime":1715099429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:13:19.661053   11892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:13:19.665835   11892 out.go:177] * [stopped-upgrade-069000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:13:19.673851   11892 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:13:19.675356   11892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:13:19.673955   11892 notify.go:220] Checking for updates...
	I0507 11:13:19.680782   11892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:13:19.683844   11892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:13:19.686733   11892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:13:19.689802   11892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:13:19.693106   11892 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:13:19.696740   11892 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0507 11:13:19.699786   11892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:13:19.703793   11892 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:13:19.710754   11892 start.go:297] selected driver: qemu2
	I0507 11:13:19.710761   11892 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:13:19.710809   11892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:13:19.713235   11892 cni.go:84] Creating CNI manager for ""
	I0507 11:13:19.713255   11892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:13:19.713280   11892 start.go:340] cluster config:
	{Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:13:19.713325   11892 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:13:19.718748   11892 out.go:177] * Starting "stopped-upgrade-069000" primary control-plane node in "stopped-upgrade-069000" cluster
	I0507 11:13:19.722843   11892 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0507 11:13:19.722860   11892 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0507 11:13:19.722868   11892 cache.go:56] Caching tarball of preloaded images
	I0507 11:13:19.722930   11892 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:13:19.722935   11892 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0507 11:13:19.722990   11892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/config.json ...
	I0507 11:13:19.723478   11892 start.go:360] acquireMachinesLock for stopped-upgrade-069000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:13:19.723508   11892 start.go:364] duration metric: took 24.667µs to acquireMachinesLock for "stopped-upgrade-069000"
	I0507 11:13:19.723517   11892 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:13:19.723523   11892 fix.go:54] fixHost starting: 
	I0507 11:13:19.723621   11892 fix.go:112] recreateIfNeeded on stopped-upgrade-069000: state=Stopped err=<nil>
	W0507 11:13:19.723629   11892 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:13:19.742351   11892 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-069000" ...
	I0507 11:13:16.893555   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:19.746927   11892 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51437-:22,hostfwd=tcp::51438-:2376,hostname=stopped-upgrade-069000 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/disk.qcow2
	I0507 11:13:19.792677   11892 main.go:141] libmachine: STDOUT: 
	I0507 11:13:19.792718   11892 main.go:141] libmachine: STDERR: 
	I0507 11:13:19.792724   11892 main.go:141] libmachine: Waiting for VM to start (ssh -p 51437 docker@127.0.0.1)...
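(Editor's note: the qemu-system-aarch64 invocation above restarts the stopped VM. A minimal Go sketch of shelling out to QEMU the same way follows; all paths are placeholders, not the machine-directory values minikube computes, and this is not minikube's actual driver code.)

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("qemu-system-aarch64",
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf",
    		"-m", "2200",
    		"-smp", "2",
    		"-boot", "d",
    		"-cdrom", "/path/to/boot2docker.iso", // placeholder
    		"-pidfile", "/path/to/qemu.pid",      // placeholder
    		// user-mode networking: forward host ports to guest SSH (22) and dockerd (2376)
    		"-nic", "user,model=virtio,hostfwd=tcp::51437-:22,hostfwd=tcp::51438-:2376",
    		"-daemonize",
    		"/path/to/disk.qcow2", // placeholder
    	)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("qemu-system-aarch64 failed: %v\noutput: %s", err, out)
    	}
    	// With -daemonize, QEMU returns once the VM is up; the caller then
    	// polls the forwarded SSH port (51437 here) until the guest answers,
    	// which is the "Waiting for VM to start" phase in the log above.
    }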
	I0507 11:13:21.896085   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:21.896442   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:21.929086   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:21.929215   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:21.949545   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:21.949630   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:21.962823   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:21.962891   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:21.974614   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:21.974677   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:21.984913   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:21.984986   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:21.995159   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:21.995225   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:22.008225   11681 logs.go:276] 0 containers: []
	W0507 11:13:22.008237   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:22.008299   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:22.018584   11681 logs.go:276] 1 containers: [be5706a7b458]
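(Editor's note: the block above enumerates container IDs per Kubernetes component with `docker ps -a --filter name=... --format {{.ID}}`. A small Go sketch of that pattern, assuming the docker CLI is on PATH; this mirrors the observable commands, not minikube's internal helpers.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches one Kubernetes component, printing only their IDs.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	// e.g. [eff257f8231c 226bf10c3f6e] — two IDs because the pod restarted
    	fmt.Println(ids, err)
    }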
	I0507 11:13:22.018602   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:22.018608   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:22.036247   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:22.036258   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:22.047892   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:22.047904   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:22.086729   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:22.086737   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:22.091406   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:22.091415   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:22.116683   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:22.116693   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:22.133265   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:22.133275   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:22.146688   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:22.146701   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:22.161580   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:22.161591   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:22.172326   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:22.172338   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:22.207429   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:22.207442   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:22.231138   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:22.231147   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:22.245724   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:22.245734   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:22.257629   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:22.257642   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:22.276025   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:22.276036   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:22.287641   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:22.287654   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:24.804356   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:29.806824   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
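(Editor's note: a sketch of the healthz probe that api_server.go is repeating above, assuming a plain HTTPS GET with a short client timeout; while the guest apiserver at 10.0.2.15:8443 is unreachable, such a probe fails with exactly the "Client.Timeout exceeded while awaiting headers" error logged.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// the apiserver serves a cluster-internal certificate
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. context deadline exceeded while the VM is down
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz: unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }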
	I0507 11:13:29.807039   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:29.822948   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:29.823017   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:29.833942   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:29.834012   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:29.846960   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:29.847031   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:29.857664   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:29.857728   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:29.870216   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:29.870284   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:29.881100   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:29.881158   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:29.892160   11681 logs.go:276] 0 containers: []
	W0507 11:13:29.892169   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:29.892226   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:29.902697   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:29.902713   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:29.902718   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:29.919968   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:29.919977   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:29.931541   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:29.931550   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:29.956412   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:29.956419   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:29.960746   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:29.960756   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:29.994769   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:29.994779   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:30.008905   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:30.008919   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:30.023920   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:30.023928   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:30.039822   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:30.039836   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:30.051722   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:30.051735   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:30.090498   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:30.090505   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:30.115746   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:30.115757   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:30.131954   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:30.131963   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:30.143745   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:30.143755   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:30.158675   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:30.158685   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:30.179750   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:30.179760   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:32.700623   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:37.711034   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:37.711399   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:37.746574   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:37.746704   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:37.766121   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:37.766207   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:37.780665   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:37.780743   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:37.792719   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:37.792784   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:37.808995   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:37.809056   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:37.819659   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:37.819727   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:37.829767   11681 logs.go:276] 0 containers: []
	W0507 11:13:37.829777   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:37.829828   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:37.840114   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:37.840130   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:37.840135   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:37.851657   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:37.851670   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:37.875786   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:37.875797   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:37.880120   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:37.880127   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:37.914675   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:37.914686   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:37.929789   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:37.929804   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:37.940822   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:37.940836   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:37.952579   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:37.952589   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:37.967459   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:37.967469   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:37.979733   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:37.979744   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:37.995429   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:37.995439   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:38.009437   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:38.009446   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:38.033493   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:38.033504   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:38.050331   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:38.050340   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:38.067508   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:38.067518   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:38.079192   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:38.079201   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:40.622633   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:39.944802   11892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/config.json ...
	I0507 11:13:39.945546   11892 machine.go:94] provisionDockerMachine start ...
	I0507 11:13:39.945729   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:39.946154   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:39.946167   11892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 11:13:40.042522   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
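(Editor's note: the provisioning above runs one-off commands like `hostname` over SSH to localhost:51437 as user docker. A self-contained sketch using golang.org/x/crypto/ssh — an assumption about the transport, since the log only says "SSH client type: native" — with a placeholder key path.)

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machines/<name>/id_rsa") // placeholder
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "localhost:51437", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("hostname")
    	log.Printf("output: %s err: %v", out, err) // "minikube" in the run above
    }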
	I0507 11:13:40.042557   11892 buildroot.go:166] provisioning hostname "stopped-upgrade-069000"
	I0507 11:13:40.042647   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.042855   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.042867   11892 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-069000 && echo "stopped-upgrade-069000" | sudo tee /etc/hostname
	I0507 11:13:40.121753   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-069000
	
	I0507 11:13:40.121816   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.121946   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.121956   11892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-069000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-069000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-069000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 11:13:40.195290   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 11:13:40.195302   11892 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18804-8175/.minikube CaCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18804-8175/.minikube}
	I0507 11:13:40.195313   11892 buildroot.go:174] setting up certificates
	I0507 11:13:40.195319   11892 provision.go:84] configureAuth start
	I0507 11:13:40.195328   11892 provision.go:143] copyHostCerts
	I0507 11:13:40.195394   11892 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem, removing ...
	I0507 11:13:40.195404   11892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem
	I0507 11:13:40.195533   11892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem (1078 bytes)
	I0507 11:13:40.195725   11892 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem, removing ...
	I0507 11:13:40.195728   11892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem
	I0507 11:13:40.195777   11892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem (1123 bytes)
	I0507 11:13:40.195913   11892 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem, removing ...
	I0507 11:13:40.195916   11892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem
	I0507 11:13:40.195959   11892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem (1675 bytes)
	I0507 11:13:40.196048   11892 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-069000 san=[127.0.0.1 localhost minikube stopped-upgrade-069000]
	I0507 11:13:40.251626   11892 provision.go:177] copyRemoteCerts
	I0507 11:13:40.251656   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 11:13:40.251662   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:13:40.290560   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0507 11:13:40.298251   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 11:13:40.305218   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0507 11:13:40.311762   11892 provision.go:87] duration metric: took 116.302333ms to configureAuth
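(Editor's note: provision.go above generates a server certificate signed by the local minikube CA with the SANs shown in the log. A sketch of that step with Go's crypto/x509; key size and field choices are illustrative assumptions, not minikube's exact values.)

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a server cert carrying the SANs from the log:
    // 127.0.0.1, localhost, minikube, stopped-upgrade-069000.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-069000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-069000"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }

The resulting server.pem and server-key.pem are then copied into /etc/docker on the guest, as the scp lines above show.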
	I0507 11:13:40.311772   11892 buildroot.go:189] setting minikube options for container-runtime
	I0507 11:13:40.311882   11892 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:13:40.311917   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.312008   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.312013   11892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 11:13:40.382240   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 11:13:40.382250   11892 buildroot.go:70] root file system type: tmpfs
	I0507 11:13:40.382314   11892 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 11:13:40.382389   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.382514   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.382550   11892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 11:13:40.457303   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 11:13:40.457365   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.457491   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.457500   11892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 11:13:40.840104   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
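(Editor's note: the one-liner at 11:13:40.457500 is an idempotent install. `diff -u` exits 0 when the files match and non-zero when they differ or — as here, where the guest had no docker.service yet, hence the "can't stat" output — the old file is missing, so the failure branch moves the new unit into place and restarts Docker. A sketch under that reading; runOverSSH is a hypothetical stand-in for minikube's SSH runner.)

    package provisiondocker

    func installUnit(runOverSSH func(cmd string) error) error {
    	// exit 0: unit already up to date, nothing to restart
    	if err := runOverSSH("sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new"); err == nil {
    		return nil
    	}
    	// non-zero: install the new unit and bounce the service
    	return runOverSSH("sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service && " +
    		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker")
    }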
	I0507 11:13:40.840118   11892 machine.go:97] duration metric: took 893.556542ms to provisionDockerMachine
	I0507 11:13:40.840125   11892 start.go:293] postStartSetup for "stopped-upgrade-069000" (driver="qemu2")
	I0507 11:13:40.840132   11892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 11:13:40.840211   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 11:13:40.840222   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:13:40.878018   11892 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 11:13:40.879147   11892 info.go:137] Remote host: Buildroot 2021.02.12
	I0507 11:13:40.879155   11892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18804-8175/.minikube/addons for local assets ...
	I0507 11:13:40.879231   11892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18804-8175/.minikube/files for local assets ...
	I0507 11:13:40.879324   11892 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem -> 94222.pem in /etc/ssl/certs
	I0507 11:13:40.879412   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 11:13:40.881874   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem --> /etc/ssl/certs/94222.pem (1708 bytes)
	I0507 11:13:40.888466   11892 start.go:296] duration metric: took 48.283166ms for postStartSetup
	I0507 11:13:40.888479   11892 fix.go:56] duration metric: took 21.152670541s for fixHost
	I0507 11:13:40.888510   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.888615   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.888619   11892 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 11:13:40.958702   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715105621.134925421
	
	I0507 11:13:40.958711   11892 fix.go:216] guest clock: 1715105621.134925421
	I0507 11:13:40.958715   11892 fix.go:229] Guest: 2024-05-07 11:13:41.134925421 -0700 PDT Remote: 2024-05-07 11:13:40.88848 -0700 PDT m=+21.257240584 (delta=246.445421ms)
	I0507 11:13:40.958731   11892 fix.go:200] guest clock delta is within tolerance: 246.445421ms
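(Editor's note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it is within tolerance. A small sketch of that comparison; float parsing loses a little nanosecond precision, which is harmless for a tolerance check.)

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	d, _ := guestClockDelta("1715105621.134925421", time.Now())
    	fmt.Println(d) // ~246ms in the run above
    }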
	I0507 11:13:40.958734   11892 start.go:83] releasing machines lock for "stopped-upgrade-069000", held for 21.222858042s
	I0507 11:13:40.958805   11892 ssh_runner.go:195] Run: cat /version.json
	I0507 11:13:40.958816   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:13:40.958823   11892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 11:13:40.958850   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	W0507 11:13:40.959462   11892 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51437: connect: connection refused
	I0507 11:13:40.959491   11892 retry.go:31] will retry after 213.71735ms: dial tcp [::1]:51437: connect: connection refused
	W0507 11:13:41.220056   11892 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0507 11:13:41.220246   11892 ssh_runner.go:195] Run: systemctl --version
	I0507 11:13:41.224222   11892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 11:13:41.227474   11892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 11:13:41.227524   11892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0507 11:13:41.232860   11892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0507 11:13:41.240289   11892 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 11:13:41.240300   11892 start.go:494] detecting cgroup driver to use...
	I0507 11:13:41.240399   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 11:13:41.249905   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0507 11:13:41.253444   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 11:13:41.256872   11892 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 11:13:41.256895   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 11:13:41.260321   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 11:13:41.263793   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 11:13:41.266918   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 11:13:41.269606   11892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 11:13:41.272513   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 11:13:41.275906   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 11:13:41.279140   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 11:13:41.281919   11892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 11:13:41.284655   11892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 11:13:41.287620   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:41.356814   11892 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 11:13:41.367875   11892 start.go:494] detecting cgroup driver to use...
	I0507 11:13:41.367955   11892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 11:13:41.372509   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 11:13:41.377015   11892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 11:13:41.384088   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 11:13:41.388723   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 11:13:41.393442   11892 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 11:13:41.433114   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
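(Editor's note: with Docker chosen as the container runtime, the steps above stop any competing runtimes and re-check that they are inactive. A sketch of that loop, mirroring the commands as logged; `systemctl is-active --quiet` exits non-zero when the unit is inactive, so a nil error means "still running".)

    package main

    import "os/exec"

    func main() {
    	for _, svc := range []string{"containerd", "crio"} {
    		if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", svc).Run() == nil {
    			exec.Command("sudo", "systemctl", "stop", "-f", svc).Run()
    		}
    	}
    }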
	I0507 11:13:41.438203   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 11:13:41.443421   11892 ssh_runner.go:195] Run: which cri-dockerd
	I0507 11:13:41.444601   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 11:13:41.447503   11892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 11:13:41.452857   11892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 11:13:41.530763   11892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 11:13:41.606235   11892 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 11:13:41.606299   11892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 11:13:41.611537   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:41.696054   11892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 11:13:42.844063   11892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146855417s)
	I0507 11:13:42.844119   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 11:13:42.848745   11892 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0507 11:13:42.855873   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 11:13:42.860510   11892 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 11:13:42.940200   11892 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 11:13:43.016094   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:43.092860   11892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 11:13:43.098759   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 11:13:43.103812   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:43.162376   11892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 11:13:43.208312   11892 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 11:13:43.208413   11892 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 11:13:43.210340   11892 start.go:562] Will wait 60s for crictl version
	I0507 11:13:43.210374   11892 ssh_runner.go:195] Run: which crictl
	I0507 11:13:43.211729   11892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 11:13:43.227092   11892 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0507 11:13:43.227165   11892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 11:13:43.244824   11892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 11:13:43.264923   11892 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0507 11:13:43.264995   11892 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0507 11:13:43.266334   11892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 11:13:43.270554   11892 kubeadm.go:877] updating cluster {Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0507 11:13:43.270604   11892 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0507 11:13:43.270645   11892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 11:13:43.281236   11892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 11:13:43.281245   11892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0507 11:13:43.281292   11892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 11:13:43.284438   11892 ssh_runner.go:195] Run: which lz4
	I0507 11:13:43.285672   11892 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0507 11:13:43.286987   11892 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 11:13:43.287006   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0507 11:13:44.003239   11892 docker.go:649] duration metric: took 716.960417ms to copy over tarball
	I0507 11:13:44.003300   11892 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
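(Editor's note: the preload flow above is check-then-copy: `stat` on the guest fails while the tarball is absent, the ~360 MB archive is scp'd over, then unpacked with lz4 into /var. A sketch under that reading; run and scp are hypothetical stand-ins for minikube's SSH-runner helpers.)

    package preload

    func ensurePreload(run func(cmd string) error, scp func(src, dst string) error) error {
    	// stat exits non-zero while the tarball is absent, as in the log's existence check
    	if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
    		if err := scp("/host/cache/preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
    			return err
    		}
    	}
    	// unpack with lz4, preserving the security.capability extended attribute
    	// so binaries in the image keep their file capabilities
    	return run(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`)
    }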
	I0507 11:13:45.629608   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:45.629698   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:45.641446   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:45.641518   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:45.651924   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:45.651991   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:45.664269   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:45.664341   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:45.674945   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:45.675010   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:45.686485   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:45.686561   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:45.697148   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:45.697218   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:45.708064   11681 logs.go:276] 0 containers: []
	W0507 11:13:45.708074   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:45.708131   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:45.719363   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:45.719380   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:45.719385   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:45.761110   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:45.761120   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:45.765304   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:45.765310   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:45.779097   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:45.779111   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:45.809519   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:45.809532   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:45.823717   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:45.823728   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:13:45.838160   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:45.838174   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:45.872283   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:45.872294   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:45.884495   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:45.884507   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:45.899423   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:45.899436   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:45.918282   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:45.918294   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:45.933466   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:45.933475   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:45.944874   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:45.944883   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:45.957497   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:45.957507   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:45.974555   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:45.974566   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:45.986172   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:45.986183   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:45.168535   11892 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164225875s)
	I0507 11:13:45.168550   11892 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0507 11:13:45.184828   11892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 11:13:45.188152   11892 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0507 11:13:45.193280   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:45.281973   11892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 11:13:46.821282   11892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.538112542s)
	I0507 11:13:46.821391   11892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 11:13:46.833016   11892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 11:13:46.833026   11892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0507 11:13:46.833044   11892 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0507 11:13:46.839760   11892 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:46.839780   11892 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:46.839847   11892 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:46.839866   11892 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:46.839904   11892 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:46.839915   11892 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:46.839979   11892 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0507 11:13:46.840377   11892 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:46.847096   11892 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:46.847262   11892 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:46.847965   11892 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0507 11:13:46.848114   11892 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:46.847979   11892 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:46.848002   11892 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:46.848137   11892 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:46.847968   11892 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0507 11:13:47.639203   11892 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0507 11:13:47.639500   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:47.658595   11892 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0507 11:13:47.658631   11892 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:47.658708   11892 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:47.676777   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0507 11:13:47.676921   11892 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0507 11:13:47.678719   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0507 11:13:47.678736   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0507 11:13:47.705619   11892 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0507 11:13:47.705635   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
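(Editor's note: a sketch of the cache_images.go "needs transfer" decision above: an image must be (re)loaded when `docker image inspect` inside the VM returns no ID or a different ID than the cached arm64 copy — hence the arch-mismatch warning for the amd64 storage-provisioner at 11:13:47.639203. run is a hypothetical stand-in for the SSH runner.)

    package cacheimages

    import "strings"

    func needsTransfer(run func(cmd string) (string, error), image, wantID string) bool {
    	gotID, err := run("docker image inspect --format {{.Id}} " + image)
    	if err != nil {
    		return true // image not present in the VM at all
    	}
    	return strings.TrimSpace(gotID) != wantID
    }

When a transfer is needed, the stale image is removed with `docker rmi`, the cached tarball is scp'd to /var/lib/minikube/images, and it is loaded via `sudo cat ... | docker load`, as the surrounding log lines show.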
	I0507 11:13:47.838857   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:47.874122   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:47.928106   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:47.947638   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0507 11:13:47.960157   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0507 11:13:47.960199   11892 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0507 11:13:47.960208   11892 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0507 11:13:47.960215   11892 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:47.960217   11892 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:47.960271   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:47.960281   11892 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0507 11:13:47.960271   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:47.960291   11892 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:47.960314   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:47.962805   11892 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0507 11:13:47.962819   11892 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0507 11:13:47.962862   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0507 11:13:47.987867   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0507 11:13:47.987976   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0507 11:13:47.987989   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0507 11:13:47.992811   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0507 11:13:47.995130   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0507 11:13:47.995139   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0507 11:13:47.995149   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0507 11:13:47.995232   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0507 11:13:47.996938   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0507 11:13:47.996961   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0507 11:13:48.013809   11892 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0507 11:13:48.013823   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0507 11:13:48.060279   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0507 11:13:48.072762   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:48.087425   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0507 11:13:48.097631   11892 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0507 11:13:48.097759   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:48.100833   11892 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0507 11:13:48.100857   11892 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:48.100904   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:48.120917   11892 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0507 11:13:48.120939   11892 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:48.121000   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:48.125444   11892 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0507 11:13:48.125468   11892 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:48.125524   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:48.138751   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0507 11:13:48.173763   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0507 11:13:48.173851   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0507 11:13:48.173962   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0507 11:13:48.184724   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0507 11:13:48.184757   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0507 11:13:48.264581   11892 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0507 11:13:48.264596   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0507 11:13:48.400521   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0507 11:13:48.400550   11892 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0507 11:13:48.400560   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0507 11:13:48.437574   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0507 11:13:48.437613   11892 cache_images.go:92] duration metric: took 1.60345075s to LoadCachedImages
	W0507 11:13:48.437674   11892 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
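The sequence above (image.go -> cache_images.go -> docker.go) is minikube's cached-image fallback: inspect the image ID in the guest runtime, remove a wrong-arch copy, stat the tarball path on the guest, scp the cached tarball over only if missing, then pipe it into docker load. A minimal sketch of that check-transfer-load pattern over plain ssh; NODE and the paths are placeholders, and the guest paths may need sudo:

    # Only transfer and load the tarball if the guest runtime lacks the image.
    NODE=user@10.0.2.15
    IMG=gcr.io/k8s-minikube/storage-provisioner:v5
    TAR=$HOME/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
    DST=/var/lib/minikube/images/storage-provisioner_v5
    if ! ssh "$NODE" docker image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
      ssh "$NODE" stat "$DST" >/dev/null 2>&1 || scp "$TAR" "$NODE:$DST"   # skip copy if already staged
      ssh "$NODE" "sudo cat $DST | docker load"                            # load staged tarball into docker
    fi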
	I0507 11:13:48.437681   11892 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0507 11:13:48.437739   11892 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-069000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
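The kubelet unit above is installed as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line first clears the base unit's command before the real one is set. A sketch of applying such a drop-in by hand, with most kubelet flags elided for brevity:

    # Hypothetical drop-in mirroring the unit above; run on the guest.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload   # pick up the drop-in
    sudo systemctl restart kubelet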
	I0507 11:13:48.437801   11892 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 11:13:48.451736   11892 cni.go:84] Creating CNI manager for ""
	I0507 11:13:48.451749   11892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:13:48.451753   11892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 11:13:48.451763   11892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-069000 NodeName:stopped-upgrade-069000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 11:13:48.451836   11892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-069000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 11:13:48.451889   11892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0507 11:13:48.454820   11892 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 11:13:48.454843   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 11:13:48.457953   11892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0507 11:13:48.462957   11892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 11:13:48.468160   11892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0507 11:13:48.473578   11892 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0507 11:13:48.474812   11892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
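The one-liner just run is an idempotent hosts-file update: grep -v strips any existing control-plane.minikube.internal entry, the fresh entry is appended, and the result is copied back over /etc/hosts. The same pattern, annotated:

    # Rewrite /etc/hosts with exactly one control-plane entry (same logic as the logged command).
    ENTRY=$'10.0.2.15\tcontrol-plane.minikube.internal'
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
      printf '%s\n' "$ENTRY"                                     # append the current one
    } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts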
	I0507 11:13:48.478665   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:48.550329   11892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:13:48.556923   11892 certs.go:68] Setting up /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000 for IP: 10.0.2.15
	I0507 11:13:48.556932   11892 certs.go:194] generating shared ca certs ...
	I0507 11:13:48.556941   11892 certs.go:226] acquiring lock for ca certs: {Name:mk0fe80b930eecdc420c4c0ef01e5eae3fea7733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.557106   11892 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.key
	I0507 11:13:48.557146   11892 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.key
	I0507 11:13:48.557151   11892 certs.go:256] generating profile certs ...
	I0507 11:13:48.557214   11892 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key
	I0507 11:13:48.557235   11892 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5
	I0507 11:13:48.557248   11892 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0507 11:13:48.718420   11892 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5 ...
	I0507 11:13:48.718436   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5: {Name:mk8136986f918f33932b70467945a54e6f814a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.718756   11892 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5 ...
	I0507 11:13:48.718761   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5: {Name:mk33d042cf0514914cf7108135301e8f542454ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.718885   11892 certs.go:381] copying /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt
	I0507 11:13:48.719044   11892 certs.go:385] copying /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key
	I0507 11:13:48.719189   11892 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/proxy-client.key
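crypto.go generated the apiserver cert above with the four IP SANs listed at 11:13:48.557248 (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A quick way to confirm what got baked into the resulting file; the $HOME/.minikube prefix stands in for the longer profile path in the log:

    # Print the Subject Alternative Name block of the freshly generated apiserver cert.
    openssl x509 -noout -text \
      -in "$HOME/.minikube/profiles/stopped-upgrade-069000/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'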
	I0507 11:13:48.719326   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422.pem (1338 bytes)
	W0507 11:13:48.719356   11892 certs.go:480] ignoring /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422_empty.pem, impossibly tiny 0 bytes
	I0507 11:13:48.719362   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem (1679 bytes)
	I0507 11:13:48.719381   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem (1078 bytes)
	I0507 11:13:48.719405   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem (1123 bytes)
	I0507 11:13:48.719425   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem (1675 bytes)
	I0507 11:13:48.719463   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem (1708 bytes)
	I0507 11:13:48.719809   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 11:13:48.726830   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 11:13:48.734526   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 11:13:48.742064   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0507 11:13:48.749362   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0507 11:13:48.756997   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 11:13:48.763443   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 11:13:48.770542   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 11:13:48.777717   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 11:13:48.784513   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422.pem --> /usr/share/ca-certificates/9422.pem (1338 bytes)
	I0507 11:13:48.791148   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem --> /usr/share/ca-certificates/94222.pem (1708 bytes)
	I0507 11:13:48.798033   11892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 11:13:48.803292   11892 ssh_runner.go:195] Run: openssl version
	I0507 11:13:48.805239   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94222.pem && ln -fs /usr/share/ca-certificates/94222.pem /etc/ssl/certs/94222.pem"
	I0507 11:13:48.808210   11892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94222.pem
	I0507 11:13:48.809495   11892 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 17:57 /usr/share/ca-certificates/94222.pem
	I0507 11:13:48.809511   11892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94222.pem
	I0507 11:13:48.811209   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94222.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 11:13:48.814318   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 11:13:48.817590   11892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:13:48.819027   11892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:13:48.819046   11892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:13:48.820947   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 11:13:48.823696   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9422.pem && ln -fs /usr/share/ca-certificates/9422.pem /etc/ssl/certs/9422.pem"
	I0507 11:13:48.826847   11892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9422.pem
	I0507 11:13:48.828488   11892 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 17:57 /usr/share/ca-certificates/9422.pem
	I0507 11:13:48.828512   11892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9422.pem
	I0507 11:13:48.830236   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9422.pem /etc/ssl/certs/51391683.0"
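The test -L / ln -fs pairs above build OpenSSL's hashed-name lookup links: the link name is the cert's subject hash plus a .0 suffix, which is why each command hard-codes a value like 3ec20f2e or b5213941. Recomputing one link by hand shows where the hash comes from:

    # The computed hash should equal the 3ec20f2e used in the logged command for 94222.pem.
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/94222.pem)
    sudo ln -fs /etc/ssl/certs/94222.pem "/etc/ssl/certs/${H}.0"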
	I0507 11:13:48.833319   11892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 11:13:48.834709   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0507 11:13:48.838007   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0507 11:13:48.840026   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0507 11:13:48.841965   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0507 11:13:48.843844   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0507 11:13:48.845688   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
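The six openssl calls above all use -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. The same checks as a single loop over the control-plane certs named in the log:

    # Flag any control-plane cert that expires within a day.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "expiring soon: $c"
    done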
	I0507 11:13:48.848174   11892 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:13:48.848242   11892 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 11:13:48.858217   11892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0507 11:13:48.861362   11892 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0507 11:13:48.861368   11892 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0507 11:13:48.861371   11892 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0507 11:13:48.861392   11892 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0507 11:13:48.864084   11892 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:13:48.864362   11892 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-069000" does not appear in /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:13:48.864465   11892 kubeconfig.go:62] /Users/jenkins/minikube-integration/18804-8175/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-069000" cluster setting kubeconfig missing "stopped-upgrade-069000" context setting]
	I0507 11:13:48.864654   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.865116   11892 kapi.go:59] client config for stopped-upgrade-069000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 11:13:48.865440   11892 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0507 11:13:48.868141   11892 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-069000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
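Drift detection here is simply diff -u of the live kubeadm.yaml against the freshly rendered .new file; any non-zero exit triggers the reconfigure path (in this run, the CRI socket scheme and cgroup driver changed). A sketch of the same decision:

    # A non-zero diff exit status means the rendered config changed since the last run.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "config drift detected; reconfiguring from kubeadm.yaml.new"
    fi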
	I0507 11:13:48.868147   11892 kubeadm.go:1154] stopping kube-system containers ...
	I0507 11:13:48.868187   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 11:13:48.878606   11892 docker.go:483] Stopping containers: [2cb73641d9d8 863c2a33feb6 c1225d1b2bab a3e5338202fe f78da15e98b0 b050dd24f9a8 94b863037f9c 9023fe75c28f]
	I0507 11:13:48.878671   11892 ssh_runner.go:195] Run: docker stop 2cb73641d9d8 863c2a33feb6 c1225d1b2bab a3e5338202fe f78da15e98b0 b050dd24f9a8 94b863037f9c 9023fe75c28f
	I0507 11:13:48.889188   11892 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0507 11:13:48.894689   11892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:13:48.897751   11892 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 11:13:48.897760   11892 kubeadm.go:156] found existing configuration files:
	
	I0507 11:13:48.897780   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf
	I0507 11:13:48.900366   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 11:13:48.900389   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:13:48.903016   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf
	I0507 11:13:48.906200   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 11:13:48.906226   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:13:48.908840   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf
	I0507 11:13:48.911190   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 11:13:48.911211   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:13:48.914156   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf
	I0507 11:13:48.916531   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 11:13:48.916555   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 11:13:48.919103   11892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
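The four grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it, and the new kubeadm.yaml is then promoted. The same logic as a loop:

    # Remove kubeconfigs that do not point at the expected control-plane endpoint.
    EP=https://control-plane.minikube.internal:51472
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done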
	I0507 11:13:48.922066   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:48.944484   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:49.567933   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:48.513218   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:49.705730   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:49.738019   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
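Rather than a full kubeadm init, the restart path re-runs individual init phases against the repaired config, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, etcd. The same sequence as one script, with the binary path taken from the log (note $phase is left unquoted on purpose so "certs all" splits into two arguments):

    # Re-run the kubeadm init phases in the order minikube uses them.
    K=/var/lib/minikube/binaries/v1.24.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config "$CFG"
    done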
	I0507 11:13:49.759544   11892 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:13:49.759622   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:13:50.262186   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:13:50.762298   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:13:50.771239   11892 api_server.go:72] duration metric: took 1.0111095s to wait for apiserver process to appear ...
	I0507 11:13:50.771250   11892 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:13:50.771260   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
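api_server.go first waits for a kube-apiserver process (pgrep every 500ms), then polls /healthz on the guest IP until it answers or the wait budget runs out; in this run every attempt times out. A curl-based equivalent of the poll, with the 2s per-request and 5s retry timings as assumptions, and -k because the cluster CA is not in the host trust store:

    # Poll the apiserver health endpoint until it responds.
    until curl -ksf --max-time 2 https://10.0.2.15:8443/healthz >/dev/null; do
      echo "apiserver not healthy yet; retrying"
      sleep 5
    done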
	I0507 11:13:53.518203   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:53.518331   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:13:53.529626   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:13:53.529697   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:13:53.540960   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:13:53.541031   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:13:53.556247   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:13:53.556316   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:13:53.566922   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:13:53.566987   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:13:53.577544   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:13:53.577620   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:13:53.588642   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:13:53.588711   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:13:53.599425   11681 logs.go:276] 0 containers: []
	W0507 11:13:53.599437   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:13:53.599492   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:13:53.611024   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:13:53.611043   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:13:53.611053   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:13:53.635736   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:13:53.635757   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:13:53.676336   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:13:53.676356   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:13:53.718682   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:13:53.718699   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:13:53.734442   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:13:53.734453   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:13:53.750885   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:13:53.750897   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:13:53.766081   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:13:53.766095   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:13:53.791771   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:13:53.791785   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:13:53.812758   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:13:53.812778   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:13:53.828814   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:13:53.828828   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:13:53.833888   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:13:53.833900   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:13:53.848749   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:13:53.848762   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:13:53.861078   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:13:53.861091   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:13:53.873386   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:13:53.873397   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:13:53.891521   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:13:53.891532   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:13:53.903225   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:13:53.903236   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
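Each "Gathering logs for ..." block above pairs a docker ps name filter (k8s_<component>) with docker logs --tail 400 per matching container, plus journalctl for docker/kubelet, dmesg, and a kubectl describe nodes. The per-component part condenses to:

    # Collect the last 400 log lines from every kube-system component container, running or exited.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
        echo "== ${name} ${id} =="
        docker logs --tail 400 "$id"
      done
    done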
	I0507 11:13:55.775418   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:55.775471   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:56.420120   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:00.777472   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:00.777541   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:01.424025   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:01.424273   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:01.448593   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:14:01.448712   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:01.465655   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:14:01.465741   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:01.478494   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:14:01.478567   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:01.489436   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:14:01.489506   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:01.499879   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:14:01.499953   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:01.510356   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:14:01.510429   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:01.522317   11681 logs.go:276] 0 containers: []
	W0507 11:14:01.522326   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:01.522385   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:01.533045   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:14:01.533062   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:14:01.533067   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:14:01.547288   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:14:01.547299   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:14:01.564973   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:14:01.564984   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:14:01.583669   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:14:01.583680   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:14:01.598746   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:14:01.598757   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:14:01.609926   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:14:01.609938   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:14:01.633937   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:14:01.633948   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:14:01.652973   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:14:01.652984   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:14:01.668174   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:14:01.668183   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:14:01.679396   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:01.679406   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:01.702080   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:01.702092   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:01.740917   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:01.740925   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:01.745510   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:14:01.745520   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:14:01.758807   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:14:01.758823   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:01.770796   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:01.770808   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:01.806678   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:14:01.806691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:14:04.325066   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:05.778976   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:05.779006   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:09.328231   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:09.328343   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:09.339505   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:14:09.339578   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:09.349514   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:14:09.349590   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:09.360748   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:14:09.360826   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:09.371197   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:14:09.371269   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:09.381810   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:14:09.381880   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:09.391756   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:14:09.391825   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:09.401904   11681 logs.go:276] 0 containers: []
	W0507 11:14:09.401915   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:09.401969   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:09.412122   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:14:09.412139   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:14:09.412146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:14:09.430049   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:14:09.430059   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:14:09.446929   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:09.446940   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:09.451378   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:14:09.451385   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:14:09.462467   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:14:09.462478   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:14:09.474058   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:14:09.474070   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:14:09.486000   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:09.486011   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:09.510147   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:14:09.510158   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:09.523971   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:09.523982   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:09.563384   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:14:09.563394   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:14:09.587370   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:14:09.587381   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:14:09.601195   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:14:09.601208   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:14:09.613290   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:14:09.613303   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:14:09.628765   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:09.628779   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:09.668442   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:14:09.668455   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:14:09.683592   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:14:09.683604   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
	I0507 11:14:10.780498   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:10.780541   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:12.203595   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:15.781672   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:15.781716   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:17.206420   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:17.206620   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:17.218318   11681 logs.go:276] 2 containers: [eff257f8231c 226bf10c3f6e]
	I0507 11:14:17.218400   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:17.232213   11681 logs.go:276] 2 containers: [66bdc270ebc1 090f2479094d]
	I0507 11:14:17.232292   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:17.243076   11681 logs.go:276] 1 containers: [73fa526a6cc2]
	I0507 11:14:17.243146   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:17.253769   11681 logs.go:276] 2 containers: [221c34b17b70 d10c62a82fe9]
	I0507 11:14:17.253837   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:17.264167   11681 logs.go:276] 1 containers: [d4bb3e16f58f]
	I0507 11:14:17.264235   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:17.275562   11681 logs.go:276] 2 containers: [9cdcee8a3551 6ba6837d6418]
	I0507 11:14:17.275631   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:17.285753   11681 logs.go:276] 0 containers: []
	W0507 11:14:17.285762   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:17.285815   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:17.296342   11681 logs.go:276] 1 containers: [be5706a7b458]
	I0507 11:14:17.296359   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:17.296365   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:17.334709   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:14:17.334718   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:17.373795   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:17.373806   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:17.378769   11681 logs.go:123] Gathering logs for coredns [73fa526a6cc2] ...
	I0507 11:14:17.378778   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73fa526a6cc2"
	I0507 11:14:17.390483   11681 logs.go:123] Gathering logs for kube-controller-manager [9cdcee8a3551] ...
	I0507 11:14:17.390495   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cdcee8a3551"
	I0507 11:14:17.408648   11681 logs.go:123] Gathering logs for kube-controller-manager [6ba6837d6418] ...
	I0507 11:14:17.408659   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ba6837d6418"
	I0507 11:14:17.423288   11681 logs.go:123] Gathering logs for storage-provisioner [be5706a7b458] ...
	I0507 11:14:17.423300   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be5706a7b458"
	I0507 11:14:17.434792   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:17.434803   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:17.458475   11681 logs.go:123] Gathering logs for kube-scheduler [221c34b17b70] ...
	I0507 11:14:17.458483   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221c34b17b70"
	I0507 11:14:17.470537   11681 logs.go:123] Gathering logs for kube-scheduler [d10c62a82fe9] ...
	I0507 11:14:17.470548   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10c62a82fe9"
	I0507 11:14:17.491400   11681 logs.go:123] Gathering logs for kube-proxy [d4bb3e16f58f] ...
	I0507 11:14:17.491411   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4bb3e16f58f"
	I0507 11:14:17.504359   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:17.504371   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:17.543843   11681 logs.go:123] Gathering logs for kube-apiserver [eff257f8231c] ...
	I0507 11:14:17.543852   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eff257f8231c"
	I0507 11:14:17.558623   11681 logs.go:123] Gathering logs for kube-apiserver [226bf10c3f6e] ...
	I0507 11:14:17.558637   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 226bf10c3f6e"
	I0507 11:14:17.584127   11681 logs.go:123] Gathering logs for etcd [66bdc270ebc1] ...
	I0507 11:14:17.584140   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66bdc270ebc1"
	I0507 11:14:17.597832   11681 logs.go:123] Gathering logs for etcd [090f2479094d] ...
	I0507 11:14:17.597843   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 090f2479094d"
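	The stanza above is minikube's diagnostic sweep: it discovers each control-plane container with a `docker ps -a --filter=name=k8s_<component>` name filter, then tails the last 400 lines of each container it found. Below is a minimal Go sketch of that discover-then-tail pattern, assuming docker is on PATH and reusing the k8s_ name prefixes from the log; the helper names here are illustrative, not minikube's actual API.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("list", c, "failed:", err)
                continue
            }
            for _, id := range ids {
                // Mirrors the `docker logs --tail 400 <id>` runs in the log above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }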
	I0507 11:14:20.114962   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:20.782863   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:20.782905   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:25.117524   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:25.117614   11681 kubeadm.go:591] duration metric: took 4m4.150827667s to restartPrimaryControlPlane
	W0507 11:14:25.117693   11681 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
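	The repeating "Checking apiserver healthz ... stopped" pairs come from a fixed-interval health poll in which every GET to /healthz expires on the HTTP client's own timeout, producing the "(Client.Timeout exceeded while awaiting headers)" error. A rough Go sketch of that loop follows; the ~5s per-request timeout and 4-minute overall deadline are inferred from the log timestamps and the "took 4m4s" metric above, not taken from minikube's source, and InsecureSkipVerify stands in for the test VM's self-signed certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Per-request timeout; expiring here yields the
        // "(Client.Timeout exceeded while awaiting headers)" error seen above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustrative only: skip verification of the VM's self-signed certs.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                // Matches the api_server.go:269 "stopped:" lines; the 5s client
                // timeout is what spaces the attempts in the log.
                fmt.Println("stopped:", err)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("gave up waiting for apiserver healthz")
    }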
	I0507 11:14:25.117725   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0507 11:14:26.124627   11681 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0068555s)
	I0507 11:14:26.124707   11681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 11:14:26.129633   11681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 11:14:26.132426   11681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:14:26.135103   11681 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 11:14:26.135109   11681 kubeadm.go:156] found existing configuration files:
	
	I0507 11:14:26.135134   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/admin.conf
	I0507 11:14:26.138319   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 11:14:26.138346   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:14:26.141291   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/kubelet.conf
	I0507 11:14:26.143658   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 11:14:26.143678   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:14:26.146806   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/controller-manager.conf
	I0507 11:14:26.149786   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 11:14:26.149811   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:14:26.152522   11681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/scheduler.conf
	I0507 11:14:26.155050   11681 kubeadm.go:162] "https://control-plane.minikube.internal:51264" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51264 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 11:14:26.155069   11681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
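	The grep/rm sequence above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and deletes the file when the endpoint is absent (here the files do not exist at all, so every grep exits 2 and every file is treated as stale). A compact Go sketch of the same check, with the endpoint and file list copied from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:51264" // from the log above
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: treat as stale and remove it
                // so the subsequent `kubeadm init` can regenerate it.
                fmt.Println("removing stale", conf)
                os.Remove(conf)
            }
        }
    }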
	I0507 11:14:26.158085   11681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 11:14:26.175140   11681 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0507 11:14:26.175184   11681 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 11:14:26.224106   11681 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 11:14:26.224157   11681 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 11:14:26.224335   11681 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 11:14:26.274159   11681 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 11:14:26.278327   11681 out.go:204]   - Generating certificates and keys ...
	I0507 11:14:26.278360   11681 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 11:14:26.278397   11681 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 11:14:26.278437   11681 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0507 11:14:26.278501   11681 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0507 11:14:26.278540   11681 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0507 11:14:26.278575   11681 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0507 11:14:26.278606   11681 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0507 11:14:26.278657   11681 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0507 11:14:26.278696   11681 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0507 11:14:26.278730   11681 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0507 11:14:26.278747   11681 kubeadm.go:309] [certs] Using the existing "sa" key
	I0507 11:14:26.278770   11681 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 11:14:26.432583   11681 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 11:14:26.542704   11681 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 11:14:26.636874   11681 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 11:14:26.714705   11681 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 11:14:26.743434   11681 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 11:14:26.744236   11681 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 11:14:26.744258   11681 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 11:14:26.827669   11681 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 11:14:25.784022   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:25.784047   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:26.831731   11681 out.go:204]   - Booting up control plane ...
	I0507 11:14:26.831780   11681 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 11:14:26.831843   11681 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 11:14:26.831900   11681 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 11:14:26.831965   11681 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 11:14:26.832153   11681 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0507 11:14:30.832263   11681 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001819 seconds
	I0507 11:14:30.832321   11681 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 11:14:30.836115   11681 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 11:14:31.344574   11681 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 11:14:31.344684   11681 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-776000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 11:14:31.848095   11681 kubeadm.go:309] [bootstrap-token] Using token: bxylxf.1yjazcthzjr0b14w
	I0507 11:14:31.853843   11681 out.go:204]   - Configuring RBAC rules ...
	I0507 11:14:31.853906   11681 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 11:14:31.853955   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 11:14:31.855869   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 11:14:31.860500   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 11:14:31.861672   11681 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 11:14:31.862432   11681 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 11:14:31.865462   11681 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 11:14:32.052551   11681 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 11:14:32.252414   11681 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 11:14:32.252919   11681 kubeadm.go:309] 
	I0507 11:14:32.252951   11681 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 11:14:32.252956   11681 kubeadm.go:309] 
	I0507 11:14:32.252989   11681 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 11:14:32.252992   11681 kubeadm.go:309] 
	I0507 11:14:32.253003   11681 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 11:14:32.253060   11681 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 11:14:32.253090   11681 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 11:14:32.253093   11681 kubeadm.go:309] 
	I0507 11:14:32.253128   11681 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 11:14:32.253134   11681 kubeadm.go:309] 
	I0507 11:14:32.253170   11681 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 11:14:32.253191   11681 kubeadm.go:309] 
	I0507 11:14:32.253218   11681 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 11:14:32.253253   11681 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 11:14:32.253362   11681 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 11:14:32.253378   11681 kubeadm.go:309] 
	I0507 11:14:32.253455   11681 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 11:14:32.253498   11681 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 11:14:32.253501   11681 kubeadm.go:309] 
	I0507 11:14:32.253545   11681 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bxylxf.1yjazcthzjr0b14w \
	I0507 11:14:32.253596   11681 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc \
	I0507 11:14:32.253609   11681 kubeadm.go:309] 	--control-plane 
	I0507 11:14:32.253611   11681 kubeadm.go:309] 
	I0507 11:14:32.253686   11681 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 11:14:32.253693   11681 kubeadm.go:309] 
	I0507 11:14:32.253734   11681 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bxylxf.1yjazcthzjr0b14w \
	I0507 11:14:32.253811   11681 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc 
	I0507 11:14:32.253903   11681 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 11:14:32.253913   11681 cni.go:84] Creating CNI manager for ""
	I0507 11:14:32.253923   11681 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:14:32.258222   11681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0507 11:14:32.265219   11681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0507 11:14:32.268219   11681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
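	At this step minikube pushes a 496-byte bridge CNI config straight from memory to /etc/cni/net.d/1-k8s.conflist. The exact payload is not reproduced in the log; the sketch below writes an illustrative bridge conflist of the general shape such configs take, so the subnet, plugin options, and version string are assumptions, not minikube's actual bytes.

    package main

    import "os"

    // Illustrative bridge CNI conflist; the real 496-byte payload is not shown
    // in the log, so every field value here is an assumption.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mirrors `sudo mkdir -p`
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }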
	I0507 11:14:32.273694   11681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 11:14:32.273777   11681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-776000 minikube.k8s.io/updated_at=2024_05_07T11_14_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=running-upgrade-776000 minikube.k8s.io/primary=true
	I0507 11:14:32.273778   11681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 11:14:32.277202   11681 ops.go:34] apiserver oom_adj: -16
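	The "apiserver oom_adj: -16" line records the result of `cat /proc/$(pgrep kube-apiserver)/oom_adj`: a negative value tells the kernel's OOM killer to prefer other processes before the apiserver. A small Go sketch of the same check, assuming pgrep is available and the legacy /proc/<pid>/oom_adj file that the log itself reads:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep -xn kube-apiserver: newest process whose name matches exactly.
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        // A negative value (e.g. the -16 above) lowers the apiserver's
        // priority as an OOM-kill candidate.
        fmt.Printf("apiserver oom_adj: %s", adj)
    }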
	I0507 11:14:32.307154   11681 kubeadm.go:1107] duration metric: took 33.413375ms to wait for elevateKubeSystemPrivileges
	W0507 11:14:32.325456   11681 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 11:14:32.325466   11681 kubeadm.go:393] duration metric: took 4m11.372901417s to StartCluster
	I0507 11:14:32.325476   11681 settings.go:142] acquiring lock: {Name:mk50bfcfedcd3b99aacdbeb1994dffd265fa3e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:14:32.325652   11681 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:14:32.326040   11681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:14:32.326261   11681 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:14:32.330199   11681 out.go:177] * Verifying Kubernetes components...
	I0507 11:14:32.326348   11681 config.go:182] Loaded profile config "running-upgrade-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:14:32.326332   11681 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 11:14:32.338203   11681 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-776000"
	I0507 11:14:32.338218   11681 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-776000"
	W0507 11:14:32.338221   11681 addons.go:243] addon storage-provisioner should already be in state true
	I0507 11:14:32.338234   11681 host.go:66] Checking if "running-upgrade-776000" exists ...
	I0507 11:14:32.338264   11681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:14:32.338281   11681 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-776000"
	I0507 11:14:32.338292   11681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-776000"
	I0507 11:14:32.339271   11681 kapi.go:59] client config for running-upgrade-776000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/running-upgrade-776000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101d4bd80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 11:14:32.339391   11681 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-776000"
	W0507 11:14:32.339396   11681 addons.go:243] addon default-storageclass should already be in state true
	I0507 11:14:32.339404   11681 host.go:66] Checking if "running-upgrade-776000" exists ...
	I0507 11:14:32.343119   11681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:14:30.785289   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:30.785336   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:32.346188   11681 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:14:32.346195   11681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 11:14:32.346200   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:14:32.346719   11681 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 11:14:32.346725   11681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 11:14:32.346729   11681 sshutil.go:53] new ssh client: &{IP:localhost Port:51232 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/running-upgrade-776000/id_rsa Username:docker}
	I0507 11:14:32.430304   11681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:14:32.435434   11681 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:14:32.435479   11681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:14:32.439158   11681 api_server.go:72] duration metric: took 112.885417ms to wait for apiserver process to appear ...
	I0507 11:14:32.439167   11681 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:14:32.439174   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:32.445094   11681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 11:14:32.515859   11681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:14:35.786812   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:35.786856   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:37.441286   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:37.441351   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:40.789113   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:40.789150   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:42.441695   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:42.441723   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:45.790463   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:45.790487   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:47.441984   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:47.442024   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:50.792597   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:50.792817   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:50.810386   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:14:50.810470   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:50.823047   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:14:50.823123   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:50.834526   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:14:50.834598   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:50.845182   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:14:50.845257   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:50.855174   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:14:50.855241   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:50.865982   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:14:50.866061   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:50.875949   11892 logs.go:276] 0 containers: []
	W0507 11:14:50.875964   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:50.876032   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:50.889027   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:14:50.889044   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:14:50.889050   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:14:50.903176   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:14:50.903186   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:14:50.945531   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:14:50.945545   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:14:50.957086   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:50.957096   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:50.981150   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:50.981159   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:50.985137   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:14:50.985147   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:14:50.996582   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:14:50.996595   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:14:51.007570   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:51.007581   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:51.110388   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:14:51.110402   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:14:51.121870   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:14:51.121882   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:14:51.135822   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:14:51.135836   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:14:51.147618   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:14:51.147630   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:14:51.165738   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:14:51.165750   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:51.178488   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:51.178501   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:51.216603   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:14:51.216616   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:14:51.230377   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:14:51.230389   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:14:51.245445   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:14:51.245461   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:14:53.763002   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:52.442861   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:52.442885   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:58.765222   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:58.765309   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:58.779482   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:14:58.779584   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:58.790419   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:14:58.790484   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:58.800523   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:14:58.800593   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:58.811158   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:14:58.811229   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:58.821492   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:14:58.821562   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:58.831906   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:14:58.831973   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:58.842304   11892 logs.go:276] 0 containers: []
	W0507 11:14:58.842316   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:58.842375   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:58.855314   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:14:58.855341   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:58.855348   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:58.859938   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:14:58.859944   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:14:58.873515   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:58.873526   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:58.911864   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:14:58.911877   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:14:58.926055   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:14:58.926065   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:14:58.937374   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:14:58.937387   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:14:58.951887   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:14:58.951896   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:14:58.964059   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:58.964070   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:59.000778   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:14:59.000788   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:14:59.039269   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:14:59.039280   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:14:59.058025   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:14:59.058036   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:14:59.071750   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:14:59.071762   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:14:59.082910   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:59.082922   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:59.108090   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:14:59.108096   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:14:59.125452   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:14:59.125462   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:14:59.136148   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:14:59.136161   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:14:59.153537   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:14:59.153551   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:57.443449   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:57.443476   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:02.444248   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:02.444299   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0507 11:15:02.788430   11681 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0507 11:15:02.792759   11681 out.go:177] * Enabled addons: storage-provisioner
	I0507 11:15:01.667525   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:02.805772   11681 addons.go:505] duration metric: took 30.479836417s for enable addons: enabled=[storage-provisioner]
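	The 'default-storageclass' failure above originates in the addon callback trying to list StorageClasses through the same unreachable apiserver ("dial tcp 10.0.2.15:8443: i/o timeout"). A minimal client-go sketch of that list-and-inspect step follows; the kubeconfig path is a placeholder, and this is only the read side of what the addon does, not minikube's full make-default logic.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; minikube uses its own profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // This List is the call that failed above when the apiserver was unreachable.
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("Error listing StorageClasses:", err)
            return
        }
        for _, sc := range scs.Items {
            def := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
            fmt.Printf("%s default=%q\n", sc.Name, def)
        }
    }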
	I0507 11:15:06.669731   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:06.669984   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:06.686934   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:06.687021   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:06.699703   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:06.699773   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:06.711095   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:06.711156   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:06.721479   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:06.721546   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:06.732061   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:06.732132   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:06.743026   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:06.743091   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:06.754295   11892 logs.go:276] 0 containers: []
	W0507 11:15:06.754307   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:06.754365   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:06.765708   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:06.765725   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:06.765732   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:06.803164   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:06.803177   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:06.815204   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:06.815214   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:06.834362   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:06.834373   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:06.845738   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:06.845750   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:06.880000   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:06.880014   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:06.891631   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:06.891641   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:06.903305   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:06.903318   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:06.917073   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:06.917086   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:06.932348   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:06.932359   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:06.944051   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:06.944065   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:06.968436   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:06.968447   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:06.980069   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:06.980082   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:06.994446   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:06.994458   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:06.998528   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:06.998536   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:07.013103   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:07.013116   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:07.026343   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:07.026352   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:09.564554   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:07.445290   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:07.445329   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:14.566659   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:14.566771   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:14.579134   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:14.579205   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:14.592285   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:14.592356   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:14.602991   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:14.603059   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:14.614872   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:14.614939   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:14.624773   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:14.624841   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:14.635262   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:14.635339   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:14.645478   11892 logs.go:276] 0 containers: []
	W0507 11:15:14.645490   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:14.645546   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:14.656150   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:14.656166   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:14.656170   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:12.446565   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:12.446615   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:14.670364   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:14.673370   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:14.687683   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:14.687693   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:14.725797   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:14.725809   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:14.764097   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:14.764107   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:14.779196   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:14.779207   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:14.797434   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:14.797448   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:14.822891   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:14.822899   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:14.834378   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:14.834389   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:14.838398   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:14.838406   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:14.852450   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:14.852459   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:14.867348   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:14.867363   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:14.879070   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:14.879081   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:14.890368   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:14.890379   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:14.926878   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:14.926889   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:14.944796   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:14.944807   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:14.957790   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:14.957804   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:17.469735   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:17.448223   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:17.448255   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:22.471829   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:22.471909   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:22.485008   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:22.485077   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:22.503907   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:22.503979   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:22.518308   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:22.518378   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:22.528316   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:22.528382   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:22.538701   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:22.538773   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:22.549135   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:22.549208   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:22.559354   11892 logs.go:276] 0 containers: []
	W0507 11:15:22.559367   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:22.559426   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:22.569887   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:22.569905   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:22.569910   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:22.589593   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:22.589608   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:22.606725   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:22.606736   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:22.617766   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:22.617777   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:22.655386   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:22.655402   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:22.659987   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:22.659996   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:22.673888   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:22.673904   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:22.685459   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:22.685471   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:22.699011   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:22.699022   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:22.722628   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:22.722639   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:22.736751   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:22.736761   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:22.751772   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:22.751782   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:22.764894   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:22.764907   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:22.780960   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:22.780969   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:22.792576   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:22.792589   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:22.826573   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:22.826582   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:22.864152   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:22.864163   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:22.449976   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:22.450019   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:25.379919   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:27.450353   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:27.450399   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:30.382337   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:30.382573   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:30.405208   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:30.405301   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:30.422750   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:30.422826   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:30.435229   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:30.435296   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:30.446136   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:30.446200   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:30.456570   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:30.456639   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:30.468362   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:30.468432   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:30.479021   11892 logs.go:276] 0 containers: []
	W0507 11:15:30.479033   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:30.479087   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:30.493228   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:30.493246   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:30.493252   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:30.530050   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:30.530060   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:30.534299   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:30.534308   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:30.553620   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:30.553630   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:30.568076   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:30.568087   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:30.579951   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:30.579962   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:30.598039   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:30.598049   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:30.613971   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:30.613982   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:30.638350   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:30.638357   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:30.652164   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:30.652175   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:30.666182   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:30.666192   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:30.677772   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:30.677784   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:30.689147   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:30.689159   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:30.700696   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:30.700706   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:30.738787   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:30.738794   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:30.772987   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:30.773000   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:30.787823   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:30.787836   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:33.301511   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:32.452569   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:32.452729   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:32.463290   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:32.463362   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:32.475133   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:32.475209   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:32.485618   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:32.485693   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:32.495995   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:32.496069   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:32.506802   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:32.506873   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:32.517450   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:32.517520   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:32.527511   11681 logs.go:276] 0 containers: []
	W0507 11:15:32.527524   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:32.527582   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:32.537965   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:32.537980   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:32.537986   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:32.562156   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:32.562168   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:32.579744   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:32.579758   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:32.591200   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:32.591211   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:32.596281   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:32.596289   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:32.633295   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:32.633307   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:32.649191   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:32.649201   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:32.662917   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:32.662928   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:32.674653   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:32.674663   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:32.686371   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:32.686385   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:32.702296   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:32.702309   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:32.737776   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:32.737787   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:32.755726   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:32.755739   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:35.269694   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:38.303590   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:38.303786   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:38.319444   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:38.319529   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:38.331435   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:38.331519   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:38.346176   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:38.346245   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:38.356898   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:38.356966   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:38.370083   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:38.370154   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:38.380681   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:38.380756   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:38.396047   11892 logs.go:276] 0 containers: []
	W0507 11:15:38.396058   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:38.396119   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:38.408490   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:38.408508   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:38.408514   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:38.422377   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:38.422391   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:38.458418   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:38.458428   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:38.478292   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:38.478304   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:38.492769   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:38.492779   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:38.512706   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:38.512720   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:38.523904   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:38.523916   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:38.535956   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:38.535969   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:38.573351   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:38.573365   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:38.577531   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:38.577538   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:38.610725   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:38.610736   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:38.622207   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:38.622216   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:38.640272   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:38.640286   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:38.652200   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:38.652213   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:38.663412   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:38.663424   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:38.676069   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:38.676079   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:38.691244   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:38.691254   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:40.270446   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:40.270583   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:40.284336   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:40.284412   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:40.294850   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:40.294921   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:40.305234   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:40.305299   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:40.315529   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:40.315600   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:40.326866   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:40.326934   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:40.336974   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:40.337046   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:40.347565   11681 logs.go:276] 0 containers: []
	W0507 11:15:40.347580   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:40.347636   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:40.358722   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:40.358736   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:40.358741   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:40.376280   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:40.376291   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:40.401803   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:40.401812   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:40.413388   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:40.413400   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:40.424938   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:40.424951   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:40.443540   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:40.443553   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:40.455680   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:40.455691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:40.469858   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:40.469867   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:40.483995   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:40.484009   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:40.495680   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:40.495691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:40.507987   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:40.507998   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:40.542924   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:40.542935   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:40.548060   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:40.548068   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:41.216658   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:43.087436   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:46.218856   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:46.218988   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:46.231026   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:46.231111   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:46.241861   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:46.241931   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:46.252619   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:46.252687   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:46.263177   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:46.263240   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:46.273902   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:46.273973   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:46.284964   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:46.285027   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:46.295353   11892 logs.go:276] 0 containers: []
	W0507 11:15:46.295366   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:46.295424   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:46.305933   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:46.305949   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:46.305955   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:46.344297   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:46.344308   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:46.364951   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:46.364962   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:46.379960   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:46.379971   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:46.398053   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:46.398065   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:46.417284   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:46.417295   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:46.455796   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:46.455808   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:46.460266   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:46.460273   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:46.473885   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:46.473898   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:46.490404   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:46.490414   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:46.503379   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:46.503390   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:46.541872   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:46.541887   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:46.559524   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:46.559539   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:46.572111   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:46.572122   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:46.595908   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:46.595924   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:46.610893   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:46.610905   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:46.622822   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:46.622837   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:49.139818   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:48.089929   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:48.090267   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:48.122230   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:48.122346   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:48.141657   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:48.141740   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:48.154811   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:48.154886   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:48.166600   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:48.166664   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:48.177344   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:48.177424   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:48.187673   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:48.187739   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:48.198187   11681 logs.go:276] 0 containers: []
	W0507 11:15:48.198199   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:48.198262   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:48.208713   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:48.208728   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:48.208733   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:48.220239   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:48.220249   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:48.242931   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:48.242942   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:48.278317   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:48.278325   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:48.313738   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:48.313753   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:48.327749   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:48.327761   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:48.341715   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:48.341726   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:48.353454   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:48.353464   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:48.364777   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:48.364789   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:48.376223   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:48.376233   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:48.380997   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:48.381006   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:48.395771   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:48.395781   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:48.420811   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:48.420822   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:50.934538   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:54.141846   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:54.142134   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:54.168145   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:54.168267   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:54.185268   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:54.185351   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:54.199166   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:54.199248   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:54.215138   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:54.215209   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:54.225787   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:54.225856   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:54.239222   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:54.239290   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:54.251959   11892 logs.go:276] 0 containers: []
	W0507 11:15:54.251972   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:54.252033   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:54.269781   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:54.269799   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:54.269805   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:54.304532   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:54.304543   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:54.341403   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:54.341415   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:54.355807   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:54.355818   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:54.379048   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:54.379060   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:54.391282   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:54.391294   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:54.431576   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:54.431590   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:54.443351   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:54.443364   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:54.456661   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:54.456672   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:54.468170   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:54.468183   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:54.479996   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:54.480008   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:54.503569   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:54.503577   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:54.514351   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:54.514362   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:54.528760   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:54.528774   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:54.564969   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:54.564980   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:54.579608   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:54.579619   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:54.584250   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:54.584258   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:55.936697   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:55.936846   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:55.950215   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:15:55.950294   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:55.962247   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:15:55.962319   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:55.972497   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:15:55.972556   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:55.983038   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:15:55.983108   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:55.993900   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:15:55.993970   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:56.004298   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:15:56.004355   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:57.101098   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:56.014648   11681 logs.go:276] 0 containers: []
	W0507 11:15:56.014658   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:56.014710   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:56.025400   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:15:56.025416   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:56.025421   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:56.030160   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:15:56.030167   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:15:56.041404   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:15:56.041418   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:56.053644   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:15:56.053656   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:15:56.067767   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:15:56.067778   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:15:56.081832   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:15:56.081844   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:15:56.096016   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:15:56.096028   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:15:56.107414   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:15:56.107426   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:15:56.125398   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:56.125412   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:56.160474   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:56.160486   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:56.194574   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:15:56.194587   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:15:56.212177   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:15:56.212190   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:15:56.223482   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:56.223495   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:58.749358   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:02.103389   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:02.103629   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:02.127822   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:02.127925   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:02.142082   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:02.142158   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:02.153844   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:02.153915   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:02.164347   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:02.164415   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:02.174625   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:02.174692   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:02.185223   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:02.185292   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:02.198459   11892 logs.go:276] 0 containers: []
	W0507 11:16:02.198472   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:02.198528   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:02.209328   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:02.209346   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:02.209351   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:02.221519   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:02.221530   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:02.237773   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:02.237783   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:02.248590   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:02.248603   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:02.272612   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:02.272620   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:02.312216   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:02.312231   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:02.331058   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:02.331070   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:02.342001   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:02.342013   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:02.354172   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:02.354186   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:02.371341   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:02.371351   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:02.384623   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:02.384633   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:02.396345   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:02.396357   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:02.433352   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:02.433363   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:02.467072   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:02.467082   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:02.482673   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:02.482682   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:02.496436   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:02.496447   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:02.500526   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:02.500534   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:03.749533   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:03.749729   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:03.766300   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:03.766386   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:03.779374   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:03.779448   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:03.790785   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:03.790855   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:03.802155   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:03.802224   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:03.813012   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:03.813093   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:03.823937   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:03.824004   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:03.833722   11681 logs.go:276] 0 containers: []
	W0507 11:16:03.833738   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:03.833793   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:03.844818   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:03.844833   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:03.844838   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:03.860129   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:03.860147   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:03.872313   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:03.872324   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:03.889872   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:03.889884   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:03.901591   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:03.901602   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:03.913666   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:03.913677   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:03.952116   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:03.952124   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:03.966407   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:03.966419   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:03.980130   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:03.980146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:03.992084   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:03.992096   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:04.015215   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:04.015225   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:04.019514   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:04.019521   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:04.053188   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:04.053201   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:05.017057   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:06.567326   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:10.019165   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:10.019277   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:10.030460   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:10.030541   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:10.041073   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:10.041136   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:10.051734   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:10.051810   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:10.062120   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:10.062182   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:10.074885   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:10.074967   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:10.086534   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:10.086628   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:10.096877   11892 logs.go:276] 0 containers: []
	W0507 11:16:10.096888   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:10.096944   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:10.107078   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:10.107097   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:10.107102   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:10.126764   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:10.126776   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:10.162387   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:10.162400   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:10.176357   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:10.176368   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:10.213844   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:10.213854   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:10.225022   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:10.225035   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:10.237173   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:10.237184   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:10.251628   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:10.251638   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:10.266418   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:10.266432   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:10.279764   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:10.279778   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:10.291105   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:10.291117   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:10.329014   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:10.329030   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:10.346169   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:10.346179   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:10.371280   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:10.371288   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:10.375352   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:10.375360   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:10.391516   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:10.391526   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:10.405576   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:10.405587   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:12.918664   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:11.569806   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:11.570071   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:11.595859   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:11.595977   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:11.614226   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:11.614298   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:11.627913   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:11.627984   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:11.640515   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:11.640584   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:11.655631   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:11.655706   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:11.666354   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:11.666428   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:11.676820   11681 logs.go:276] 0 containers: []
	W0507 11:16:11.676832   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:11.676889   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:11.686946   11681 logs.go:276] 1 containers: [108e9ee704c4]
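The `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` / "N containers: [...]" pairs above are the enumeration step: before pulling logs, the process lists every container (running or exited) for each control-plane component by the kubeadm k8s_ naming convention. A rough Go sketch of that step, with the caveat that it runs docker locally rather than through minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers whose name matches k8s_<component>,
// mirroring the log's "docker ps -a --filter=name=... --format={{.ID}}"
// invocations. Exited containers are included via -a, which is why the
// log can show two IDs per component after a restart.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// matches the log's warning for the absent kindnet CNI
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```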
	I0507 11:16:11.686960   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:11.686967   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:11.720793   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:11.720804   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:11.725910   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:11.725919   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:11.743835   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:11.743845   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:11.762824   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:11.762836   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:11.777006   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:11.777019   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:11.789765   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:11.789776   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:11.801026   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:11.801037   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:11.835195   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:11.835208   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:11.849335   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:11.849347   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:11.861723   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:11.861734   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:11.873386   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:11.873396   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:11.891254   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:11.891264   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
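Each "Gathering logs for X ..." line then shells out to one of a fixed set of commands, all visible verbatim above: `docker logs --tail 400 <id>` per container, `journalctl -u <unit> -n 400` for kubelet and Docker, a severity-filtered `dmesg`, `kubectl describe nodes` via the pinned /var/lib/minikube binary, and a container-status fallback that prefers crictl when installed and falls back to plain `docker ps -a` otherwise. A condensed sketch of that gathering pass, with the commands copied from the log and local execution (instead of ssh_runner over SSH) an assumption of the sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command the way the log does: through
// /bin/bash -c, capturing combined stdout/stderr. The commands passed
// in main are verbatim from the log above.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
	fmt.Printf("  %d bytes captured\n", len(out))
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
	gather("dmesg",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	// crictl if present, otherwise plain docker ps -a:
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	// Per-container logs reuse the IDs from the enumeration step, e.g.:
	gather("kube-apiserver [3c7abe4bc8ad]", `docker logs --tail 400 3c7abe4bc8ad`)
}
```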
	I0507 11:16:14.417355   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:17.920753   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:17.920918   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:17.934381   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:17.934467   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:17.946003   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:17.946080   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:17.956064   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:17.956139   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:17.966617   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:17.966689   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:17.977506   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:17.977571   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:17.987908   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:17.987973   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:18.002773   11892 logs.go:276] 0 containers: []
	W0507 11:16:18.002786   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:18.002845   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:18.012961   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:18.012979   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:18.012984   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:18.024451   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:18.024460   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:18.039129   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:18.039139   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:18.050673   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:18.050682   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:18.062057   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:18.062066   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:18.074030   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:18.074042   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:18.111738   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:18.111747   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:18.125229   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:18.125239   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:18.167941   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:18.167953   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:18.182099   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:18.182108   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:18.220517   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:18.220527   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:18.232300   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:18.232313   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:18.257188   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:18.257199   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:18.271246   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:18.271258   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:18.275726   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:18.275733   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:18.290266   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:18.290276   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:18.301245   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:18.301256   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:19.419580   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:19.419770   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:19.442603   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:19.442691   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:19.458053   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:19.458121   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:19.470282   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:19.470342   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:19.481933   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:19.482003   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:19.492531   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:19.492600   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:19.503051   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:19.503112   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:19.516104   11681 logs.go:276] 0 containers: []
	W0507 11:16:19.516114   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:19.516172   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:19.526129   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:19.526145   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:19.526151   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:19.559946   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:19.559962   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:19.574371   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:19.574383   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:19.585731   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:19.585743   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:19.597949   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:19.597959   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:19.609152   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:19.609163   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:19.621143   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:19.621154   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:19.645572   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:19.645580   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:19.680688   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:19.680698   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:19.685496   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:19.685504   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:19.699108   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:19.699118   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:19.710791   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:19.710804   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:19.725386   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:19.725398   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:20.820584   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:22.245456   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:25.822768   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:25.822981   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:25.847967   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:25.848056   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:25.862967   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:25.863041   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:25.873759   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:25.873827   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:25.884192   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:25.884263   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:25.894458   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:25.894525   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:25.904828   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:25.904897   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:25.914573   11892 logs.go:276] 0 containers: []
	W0507 11:16:25.914585   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:25.914638   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:25.924943   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:25.924964   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:25.924970   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:25.936365   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:25.936377   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:25.949726   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:25.949737   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:25.987701   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:25.987711   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:26.002999   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:26.003011   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:26.015011   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:26.015023   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:26.019525   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:26.019532   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:26.033848   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:26.033858   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:26.049512   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:26.049524   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:26.073646   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:26.073653   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:26.086080   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:26.086091   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:26.121398   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:26.121413   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:26.133253   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:26.133265   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:26.150504   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:26.150514   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:26.162629   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:26.162638   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:26.199179   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:26.199188   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:26.213863   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:26.213872   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:28.729630   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:27.247650   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:27.247775   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:27.260083   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:27.260157   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:27.271563   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:27.271633   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:27.286019   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:27.286094   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:27.296520   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:27.296593   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:27.308112   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:27.308178   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:27.318550   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:27.318624   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:27.328451   11681 logs.go:276] 0 containers: []
	W0507 11:16:27.328461   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:27.328522   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:27.339117   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:27.339133   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:27.339139   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:27.373418   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:27.373429   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:27.387444   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:27.387454   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:27.399442   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:27.399454   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:27.411089   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:27.411102   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:27.434009   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:27.434016   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:27.445224   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:27.445234   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:27.479179   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:27.479189   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:27.483790   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:27.483799   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:27.502574   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:27.502587   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:27.514097   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:27.514109   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:27.530901   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:27.530913   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:27.548754   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:27.548771   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:30.072780   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:33.731901   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:33.732287   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:33.761552   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:33.761681   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:33.779264   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:33.779359   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:33.793478   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:33.793554   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:33.812463   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:33.812537   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:33.823020   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:33.823094   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:33.833806   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:33.833883   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:33.845270   11892 logs.go:276] 0 containers: []
	W0507 11:16:33.845282   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:33.845343   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:33.860712   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:33.860733   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:33.860739   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:33.878714   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:33.878727   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:33.890596   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:33.890609   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:33.902924   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:33.902935   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:33.925782   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:33.925788   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:33.961384   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:33.961391   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:33.974699   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:33.974710   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:33.985896   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:33.985908   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:33.997396   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:33.997407   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:34.009558   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:34.009569   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:34.022853   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:34.022863   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:34.027173   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:34.027182   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:34.060661   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:34.060672   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:34.103955   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:34.103967   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:34.117367   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:34.117377   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:34.136161   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:34.136175   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:34.148632   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:34.148642   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:35.074856   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:35.074968   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:35.088573   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:35.088641   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:35.099273   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:35.099343   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:35.109466   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:35.109537   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:35.122377   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:35.122445   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:35.132796   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:35.132860   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:35.142956   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:35.143022   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:35.153313   11681 logs.go:276] 0 containers: []
	W0507 11:16:35.153327   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:35.153382   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:35.163300   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:35.163313   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:35.163319   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:35.198788   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:35.198802   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:35.213211   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:35.213224   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:35.227068   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:35.227081   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:35.239172   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:35.239181   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:35.251511   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:35.251522   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:35.262785   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:35.262798   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:35.286255   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:35.286263   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:35.318866   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:35.318874   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:35.322999   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:35.323006   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:35.334569   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:35.334579   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:35.349293   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:35.349302   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:35.366736   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:35.366749   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:36.665581   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:37.880271   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:41.667856   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:41.668328   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:41.709811   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:41.709961   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:41.730683   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:41.730787   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:41.745408   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:41.745487   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:41.757773   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:41.757847   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:41.768280   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:41.768350   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:41.779704   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:41.779781   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:41.789970   11892 logs.go:276] 0 containers: []
	W0507 11:16:41.789985   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:41.790047   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:41.800499   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:41.800517   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:41.800522   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:41.812138   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:41.812151   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:41.829050   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:41.829059   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:41.841819   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:41.841832   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:41.856140   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:41.856155   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:41.895584   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:41.895597   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:41.911460   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:41.911472   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:41.923080   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:41.923093   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:41.937902   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:41.937913   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:41.949148   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:41.949160   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:41.972770   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:41.972778   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:41.984227   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:41.984237   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:41.997433   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:41.997446   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:42.011426   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:42.011436   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:42.023162   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:42.023177   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:42.061732   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:42.061742   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:42.066530   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:42.066538   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:44.604353   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:42.882574   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:42.882799   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:42.904980   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:42.905084   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:42.919461   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:42.919538   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:42.930381   11681 logs.go:276] 2 containers: [db71e48abae6 563be709b2f4]
	I0507 11:16:42.930448   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:42.940305   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:42.940373   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:42.951010   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:42.951084   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:42.961308   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:42.961377   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:42.971247   11681 logs.go:276] 0 containers: []
	W0507 11:16:42.971257   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:42.971313   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:42.981754   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:42.981769   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:42.981774   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:42.986217   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:42.986242   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:43.023730   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:43.023743   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:43.038828   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:43.038838   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:43.050171   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:43.050181   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:43.062158   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:43.062172   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:43.076779   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:43.076789   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:43.095456   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:43.095468   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:43.130577   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:43.130587   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:43.142163   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:43.142176   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:43.167332   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:43.167339   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:43.178648   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:43.178661   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:43.196061   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:43.196071   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:45.713870   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:49.606790   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:49.607072   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:49.634250   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:49.634357   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:49.650454   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:49.650536   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:49.663932   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:49.664010   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:50.716111   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:50.716330   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:50.742638   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:50.742759   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:50.760249   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:50.760334   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:50.773520   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:16:50.773593   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:50.785229   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:50.785291   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:50.795385   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:50.795451   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:50.805963   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:50.806040   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:50.815897   11681 logs.go:276] 0 containers: []
	W0507 11:16:50.815908   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:50.815964   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:50.826530   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:50.826546   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:50.826551   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:16:50.838056   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:50.838069   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:50.871981   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:50.871997   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:50.876773   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:50.876780   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:50.889264   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:50.889276   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:50.905132   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:50.905144   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:50.930452   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:16:50.930464   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:16:50.947388   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:16:50.947400   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:16:50.959607   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:50.959617   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:50.996093   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:50.996107   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:49.674618   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:49.674684   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:49.685131   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:49.685187   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:49.695388   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:49.695457   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:49.705514   11892 logs.go:276] 0 containers: []
	W0507 11:16:49.705523   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:49.705575   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:49.716033   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:49.716051   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:49.716057   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:49.720306   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:49.720315   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:49.755008   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:49.755022   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:49.792546   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:49.792557   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:49.817768   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:49.817782   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:49.837600   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:49.837611   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:49.850212   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:49.850223   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:49.862775   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:49.862785   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:49.876214   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:49.876225   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:49.891355   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:49.891364   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:49.915050   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:49.915058   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:49.928706   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:49.928718   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:49.968931   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:49.968942   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:49.983239   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:49.983249   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:50.006983   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:50.006995   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:50.020450   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:50.020462   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:50.031933   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:50.031946   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:52.545708   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:51.011162   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:51.011173   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:51.023064   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:51.023074   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:51.040404   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:51.040414   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:51.052265   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:51.052275   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:51.066874   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:51.066891   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:53.582125   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:57.547919   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:57.548128   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:57.562951   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:57.563032   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:57.574731   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:57.574804   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:57.585611   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:57.585674   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:57.596452   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:57.596525   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:57.606956   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:57.607028   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:57.622183   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:57.622257   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:57.632042   11892 logs.go:276] 0 containers: []
	W0507 11:16:57.632055   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:57.632116   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:57.642396   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:57.642414   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:57.642420   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:57.653941   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:57.653954   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:57.676688   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:57.676696   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:57.689317   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:57.689330   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:57.707489   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:57.707505   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:57.719271   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:57.719281   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:57.730523   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:57.730536   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:57.745118   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:57.745128   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:57.759998   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:57.760009   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:57.772245   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:57.772256   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:57.785733   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:57.785743   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:57.790471   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:57.790481   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:57.824654   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:57.824668   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:57.862899   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:57.862914   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:57.874814   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:57.874827   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:57.913174   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:57.913182   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:57.927796   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:57.927807   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
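Each discovered container then gets a bounded log tail (docker logs --tail 400 <id>), alongside host-level sources such as journalctl for kubelet and Docker, dmesg, and kubectl describe nodes. A minimal sketch of the per-container tail and of the crictl-or-docker fallback behind the "container status" lines; both helpers are hypothetical and run locally rather than over SSH as minikube does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs returns the last n lines of a container's logs,
    // as in the `docker logs --tail 400 <id>` commands above.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
        return string(out), err
    }

    // containerStatus mirrors the fallback one-liner in the log: prefer
    // crictl if present on PATH, otherwise fall back to plain docker ps.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailContainerLogs("3c7abe4bc8ad", 400)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Print(logs)
    }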
	I0507 11:16:58.584385   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:58.584589   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:58.613329   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:16:58.613447   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:58.630453   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:16:58.630547   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:58.644183   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:16:58.644258   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:58.654925   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:16:58.654994   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:58.665601   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:16:58.665659   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:58.675830   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:16:58.675896   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:58.688713   11681 logs.go:276] 0 containers: []
	W0507 11:16:58.688723   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:58.688783   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:58.699195   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:16:58.699212   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:58.699217   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:58.733519   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:58.733526   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:58.757252   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:16:58.757258   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:16:58.768487   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:16:58.768500   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:16:58.780256   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:16:58.780266   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:16:58.798114   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:16:58.798123   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:58.810448   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:58.810461   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:58.814867   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:16:58.814874   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:16:58.829164   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:16:58.829173   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:16:58.841115   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:16:58.841126   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:16:58.852669   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:58.852678   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:58.888072   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:16:58.888082   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:16:58.902640   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:16:58.902651   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:16:58.914192   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:16:58.914205   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:16:58.929480   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:16:58.929489   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:00.443172   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:01.441690   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:05.445360   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:05.445571   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:05.464959   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:05.465057   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:05.480599   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:05.480678   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:05.492193   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:05.492266   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:05.502720   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:05.502796   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:05.513255   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:05.513322   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:05.523898   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:05.523972   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:05.534028   11892 logs.go:276] 0 containers: []
	W0507 11:17:05.534041   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:05.534098   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:05.544508   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:05.544526   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:05.544531   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:05.549068   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:05.549076   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:05.563480   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:05.563491   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:05.601205   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:05.601219   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:05.620313   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:05.620324   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:05.635040   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:05.635052   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:05.649063   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:05.649074   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:05.673510   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:05.673521   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:05.685307   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:05.685317   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:05.723811   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:05.723823   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:05.758885   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:05.758897   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:05.774147   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:05.774160   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:05.785450   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:05.785460   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:05.797422   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:05.797433   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:05.809595   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:05.809608   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:05.821368   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:05.821379   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:05.839217   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:05.839235   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:08.354431   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:06.442976   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:06.443138   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:06.458107   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:06.458175   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:06.469955   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:06.470025   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:06.480937   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:06.481002   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:06.491531   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:06.491595   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:06.502331   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:06.502398   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:06.519138   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:06.519205   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:06.533647   11681 logs.go:276] 0 containers: []
	W0507 11:17:06.533658   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:06.533711   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:06.544251   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:06.544269   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:06.544275   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:06.556445   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:06.556455   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:06.590257   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:06.590268   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:06.602366   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:06.602377   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:06.617667   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:06.617676   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:06.642797   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:06.642806   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:06.654558   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:06.654570   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:06.672188   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:06.672198   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:06.676626   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:06.676632   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:06.690232   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:06.690244   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:06.701500   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:06.701513   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:06.713000   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:06.713013   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:06.725251   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:06.725261   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:06.737089   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:06.737098   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:06.773743   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:06.773756   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:09.289590   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:13.356509   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:13.356688   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:13.370962   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:13.371038   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:13.382715   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:13.382783   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:13.393966   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:13.394037   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:13.406746   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:13.406824   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:13.417039   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:13.417108   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:13.427429   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:13.427491   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:13.437294   11892 logs.go:276] 0 containers: []
	W0507 11:17:13.437308   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:13.437367   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:13.447636   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:13.447653   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:13.447660   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:13.458829   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:13.458840   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:13.470844   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:13.470855   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:13.482890   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:13.482902   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:13.494286   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:13.494298   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:13.519336   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:13.519350   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:13.557173   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:13.557181   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:13.571733   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:13.571744   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:13.585653   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:13.585664   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:13.599478   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:13.599489   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:13.611168   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:13.611181   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:13.643952   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:13.643969   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:13.658123   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:13.658133   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:13.672444   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:13.672454   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:13.689538   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:13.689549   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:13.703603   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:13.703613   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:13.707484   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:13.707490   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:14.291789   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:14.291903   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:14.303572   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:14.303635   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:14.314008   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:14.314068   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:14.324644   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:14.324717   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:14.335845   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:14.335921   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:14.346390   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:14.346459   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:14.359622   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:14.359684   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:14.369701   11681 logs.go:276] 0 containers: []
	W0507 11:17:14.369711   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:14.369764   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:14.380047   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:14.380063   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:14.380068   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:14.391511   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:14.391521   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:14.406647   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:14.406661   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:14.428799   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:14.428809   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:14.454197   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:14.454207   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:14.469244   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:14.469255   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:14.480680   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:14.480691   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:14.492858   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:14.492870   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:14.504918   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:14.504928   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:14.539670   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:14.539679   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:14.544029   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:14.544037   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:14.585127   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:14.585137   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:14.597423   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:14.597435   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:14.608740   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:14.608749   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:14.622357   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:14.622367   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:16.246967   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:17.141058   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:21.248995   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:21.249094   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:21.265767   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:21.265841   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:21.279172   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:21.279249   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:21.289686   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:21.289750   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:21.299782   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:21.299858   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:21.310138   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:21.310213   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:21.321008   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:21.321075   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:21.331465   11892 logs.go:276] 0 containers: []
	W0507 11:17:21.331477   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:21.331536   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:21.341905   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:21.341924   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:21.341929   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:21.346071   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:21.346079   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:21.379604   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:21.379616   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:21.394943   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:21.394955   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:21.408724   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:21.408735   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:21.446772   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:21.446781   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:21.460946   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:21.460956   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:21.472093   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:21.472104   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:21.483976   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:21.483989   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:21.500464   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:21.500475   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:21.512365   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:21.512378   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:21.526490   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:21.526503   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:21.539838   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:21.539849   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:21.551670   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:21.551680   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:21.565332   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:21.565364   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:21.602080   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:21.602091   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:21.613382   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:21.613393   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:24.137392   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:22.143250   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:22.143368   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:22.156509   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:22.156582   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:22.166762   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:22.166826   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:22.177675   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:22.177751   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:22.188402   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:22.188467   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:22.198605   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:22.198673   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:22.209043   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:22.209106   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:22.219330   11681 logs.go:276] 0 containers: []
	W0507 11:17:22.219343   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:22.219402   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:22.230360   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:22.230377   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:22.230383   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:22.234856   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:22.234864   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:22.249860   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:22.249873   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:22.284620   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:22.284630   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:22.298696   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:22.298706   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:22.310410   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:22.310421   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:22.326694   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:22.326704   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:22.338714   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:22.338724   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:22.363130   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:22.363136   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:22.401944   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:22.401954   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:22.419738   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:22.419748   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:22.431295   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:22.431306   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:22.443062   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:22.443072   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:22.454378   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:22.454388   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:22.465832   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:22.465842   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:24.982062   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:29.138302   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:29.138569   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:29.163015   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:29.163138   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:29.179281   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:29.179365   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:29.193525   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:29.193606   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:29.206575   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:29.206648   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:29.218472   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:29.218544   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:29.230076   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:29.230144   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:29.240161   11892 logs.go:276] 0 containers: []
	W0507 11:17:29.240172   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:29.240227   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:29.250947   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:29.250965   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:29.250971   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:29.267946   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:29.267956   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:29.279371   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:29.279385   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:29.301398   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:29.301407   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:29.315098   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:29.315112   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:29.328896   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:29.328910   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:29.345066   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:29.345077   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:29.359766   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:29.359778   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:29.371967   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:29.371977   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:29.408661   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:29.408673   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:29.423420   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:29.423431   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:29.435058   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:29.435069   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:29.446887   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:29.446900   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:29.482988   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:29.483001   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:29.519706   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:29.519719   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:29.533401   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:29.533428   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:29.538001   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:29.538010   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:29.984160   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:29.984263   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:29.999699   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:29.999776   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:30.009858   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:30.009931   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:30.020619   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:30.020692   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:30.030765   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:30.030824   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:30.041589   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:30.041660   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:30.052280   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:30.052344   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:30.062708   11681 logs.go:276] 0 containers: []
	W0507 11:17:30.062720   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:30.062780   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:30.079261   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:30.079279   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:30.079284   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:30.092820   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:30.092830   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:30.104626   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:30.104639   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:30.116238   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:30.116250   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:30.127620   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:30.127630   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:30.157524   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:30.157535   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:30.192308   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:30.192324   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:30.207449   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:30.207460   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:30.223067   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:30.223078   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:30.235695   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:30.235704   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:30.250627   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:30.250638   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:30.254925   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:30.254932   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:30.266808   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:30.266818   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:30.285424   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:30.285434   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:30.321268   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:30.321278   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:32.062247   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:32.835498   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:37.064662   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:37.065072   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:37.099858   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:37.099985   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:37.120837   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:37.120939   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:37.135739   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:37.135824   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:37.148633   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:37.148705   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:37.159574   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:37.159637   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:37.171580   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:37.171643   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:37.182124   11892 logs.go:276] 0 containers: []
	W0507 11:17:37.182136   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:37.182195   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:37.192823   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:37.192841   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:37.192846   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:37.204760   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:37.204770   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:37.220051   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:37.220063   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:37.234105   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:37.234119   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:37.262230   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:37.262242   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:37.311469   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:37.311483   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:37.326131   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:37.326143   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:37.343578   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:37.343592   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:37.366504   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:37.366513   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:37.404971   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:37.404986   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:37.442544   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:37.442559   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:37.454124   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:37.454138   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:37.458489   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:37.458496   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:37.472336   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:37.472351   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:37.483532   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:37.483545   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:37.495618   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:37.495629   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:37.513075   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:37.513085   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:37.837992   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:37.838188   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:37.857909   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:37.858005   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:37.871684   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:37.871755   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:37.884115   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:37.884186   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:37.894565   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:37.894634   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:37.904616   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:37.904689   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:37.920606   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:37.920677   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:37.930915   11681 logs.go:276] 0 containers: []
	W0507 11:17:37.930930   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:37.930993   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:37.941321   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:37.941338   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:37.941343   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:37.955416   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:37.955427   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:37.973349   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:37.973359   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:37.985077   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:37.985088   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:38.003220   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:38.003229   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:38.026649   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:38.026660   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:38.030687   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:38.030696   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:38.042062   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:38.042072   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:38.054246   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:38.054259   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:38.066421   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:38.066432   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:38.078906   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:38.078917   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:38.098346   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:38.098358   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:38.110141   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:38.110153   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:38.143253   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:38.143264   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:38.176898   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:38.176908   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:40.690175   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:40.029246   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:45.692327   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
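The repeated "Checking apiserver healthz … stopped: … context deadline exceeded" pairs above come from a client-side poll of the apiserver's /healthz endpoint with a per-request timeout. A minimal standalone Go sketch of that pattern (not minikube's actual code; the guest address 10.0.2.15:8443 is taken from the log, and TLS verification is skipped here purely for illustration):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Per-request timeout; when it fires, net/http reports
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)",
		// which is exactly the error in the stopped: lines above.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver's cert is not trusted by this host; skip
				// verification in this sketch only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz" // guest address from the log
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			fmt.Printf("healthz: %s\n", resp.Status)
			resp.Body.Close()
		}
	}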
	I0507 11:17:45.692456   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:45.703540   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:45.703612   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:45.713845   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:45.713926   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:45.724575   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:45.724647   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:45.735444   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:45.735511   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:45.745892   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:45.745961   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:45.756525   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:45.756588   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:45.769750   11681 logs.go:276] 0 containers: []
	W0507 11:17:45.769760   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:45.769815   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:45.785556   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:45.785572   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:45.785579   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:45.797441   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:45.797453   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:45.814984   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:45.814995   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:45.827205   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:45.827218   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:45.841055   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:45.841067   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:45.852951   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:45.852967   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:45.868331   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:45.868342   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:45.903230   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:45.903239   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:45.922673   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:45.922685   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:45.935018   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:45.935031   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:45.958715   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:45.958724   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:45.993797   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:45.993810   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
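Each diagnostic cycle above follows the same two-step pattern: list the container IDs for a component with docker ps filtered on the k8s_<component> name, then tail the last 400 lines of each container's logs. minikube runs these commands over SSH inside the guest; the following standalone Go sketch of the same pattern is for illustration only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
			"k8s_kube-scheduler", "k8s_kube-proxy", "k8s_kube-controller-manager"}
		for _, name := range components {
			// List all containers (running or exited) whose name matches the component.
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name="+name, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("docker ps for %s failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers for %s: %v\n", len(ids), name, ids)
			for _, id := range ids {
				// Tail the last 400 log lines, as the gathering cycles above do.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("--- %s (%s) ---\n%s", name, id, logs)
			}
		}
	}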
	I0507 11:17:45.031400   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:45.031599   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:45.049162   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:45.049268   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:45.062978   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:45.063044   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:45.074504   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:45.074573   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:45.087895   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:45.087962   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:45.103080   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:45.103149   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:45.113743   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:45.113808   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:45.124605   11892 logs.go:276] 0 containers: []
	W0507 11:17:45.124616   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:45.124671   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:45.135293   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:45.135311   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:45.135317   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:45.149143   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:45.149155   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:45.163724   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:45.163735   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:45.183435   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:45.183447   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:45.197329   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:45.197343   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:45.208814   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:45.208827   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:45.222355   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:45.222366   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:45.236160   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:45.236172   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:45.247626   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:45.247637   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:45.318711   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:45.318732   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:45.334922   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:45.334934   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:45.356295   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:45.356302   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:45.392982   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:45.392993   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:45.399456   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:45.399464   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:45.411385   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:45.411398   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:45.423111   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:45.423122   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:45.434643   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:45.434655   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:47.974419   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:46.007959   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:46.009579   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:46.021386   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:46.021399   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:46.025761   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:46.025769   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:48.539012   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:52.976624   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:52.976702   11892 kubeadm.go:591] duration metric: took 4m4.11202s to restartPrimaryControlPlane
	W0507 11:17:52.976769   11892 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0507 11:17:52.976802   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0507 11:17:54.057033   11892 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.080251167s)
	I0507 11:17:54.057110   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 11:17:54.062399   11892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 11:17:54.065394   11892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:17:54.068181   11892 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 11:17:54.068186   11892 kubeadm.go:156] found existing configuration files:
	
	I0507 11:17:54.068204   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf
	I0507 11:17:54.070658   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 11:17:54.070683   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:17:54.073781   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf
	I0507 11:17:54.076977   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 11:17:54.076999   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:17:54.079814   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf
	I0507 11:17:54.082239   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 11:17:54.082264   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:17:54.085420   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf
	I0507 11:17:54.088414   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 11:17:54.088435   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
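The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it (here all four files are absent), so the kubeadm init that follows regenerates them from scratch. A rough local Go equivalent of that cleanup logic (an illustrative sketch, not minikube's implementation; the endpoint string is copied from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Endpoint string taken from the grep commands above.
		endpoint := "https://control-plane.minikube.internal:51472"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range configs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at a stale endpoint: remove it so the
				// subsequent kubeadm init writes a fresh copy.
				fmt.Println("removing stale config:", path)
				_ = os.Remove(path)
			}
		}
	}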
	I0507 11:17:54.090851   11892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 11:17:54.108958   11892 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0507 11:17:54.108987   11892 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 11:17:54.162665   11892 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 11:17:54.162734   11892 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 11:17:54.162780   11892 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 11:17:54.211616   11892 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 11:17:54.215863   11892 out.go:204]   - Generating certificates and keys ...
	I0507 11:17:54.215897   11892 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 11:17:54.215926   11892 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 11:17:54.215960   11892 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0507 11:17:54.215987   11892 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0507 11:17:54.216018   11892 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0507 11:17:54.216064   11892 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0507 11:17:54.216093   11892 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0507 11:17:54.216138   11892 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0507 11:17:54.216241   11892 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0507 11:17:54.216299   11892 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0507 11:17:54.216317   11892 kubeadm.go:309] [certs] Using the existing "sa" key
	I0507 11:17:54.216348   11892 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 11:17:54.362365   11892 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 11:17:54.581324   11892 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 11:17:54.655785   11892 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 11:17:54.700386   11892 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 11:17:54.729363   11892 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 11:17:54.729782   11892 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 11:17:54.729855   11892 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 11:17:54.819454   11892 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 11:17:53.541103   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:53.541213   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:53.553388   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:17:53.553463   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:53.565627   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:17:53.565706   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:53.577176   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:17:53.577255   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:53.588406   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:17:53.588503   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:53.600157   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:17:53.600233   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:53.618567   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:17:53.618641   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:53.629658   11681 logs.go:276] 0 containers: []
	W0507 11:17:53.629669   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:53.629731   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:53.641244   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:17:53.641261   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:53.641266   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:53.678245   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:53.678262   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:53.714285   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:17:53.714296   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:17:53.726489   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:17:53.726500   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:17:53.744822   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:17:53.744841   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:17:53.760727   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:17:53.760739   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:17:53.776536   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:17:53.776554   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:17:53.792002   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:17:53.792014   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:17:53.804565   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:17:53.804582   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:17:53.817068   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:53.817084   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:53.841929   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:17:53.841939   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:53.853724   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:53.853736   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:53.858447   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:17:53.858453   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:17:53.874861   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:17:53.874872   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:17:53.893716   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:17:53.893727   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:17:54.823949   11892 out.go:204]   - Booting up control plane ...
	I0507 11:17:54.824007   11892 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 11:17:54.824048   11892 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 11:17:54.824092   11892 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 11:17:54.824138   11892 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 11:17:54.824257   11892 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0507 11:17:59.329139   11892 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505597 seconds
	I0507 11:17:59.329243   11892 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 11:17:59.333119   11892 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 11:17:59.851531   11892 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 11:17:59.851828   11892 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-069000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 11:18:00.354764   11892 kubeadm.go:309] [bootstrap-token] Using token: 8r216w.1dkq7l997m0tj7pp
	I0507 11:18:00.356514   11892 out.go:204]   - Configuring RBAC rules ...
	I0507 11:18:00.356578   11892 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 11:18:00.364015   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 11:18:00.365848   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 11:18:00.366606   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 11:18:00.367358   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 11:18:00.368256   11892 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 11:18:00.371176   11892 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 11:18:00.554548   11892 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 11:18:00.766330   11892 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 11:18:00.766818   11892 kubeadm.go:309] 
	I0507 11:18:00.766849   11892 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 11:18:00.766852   11892 kubeadm.go:309] 
	I0507 11:18:00.766885   11892 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 11:18:00.766889   11892 kubeadm.go:309] 
	I0507 11:18:00.766901   11892 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 11:18:00.766926   11892 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 11:18:00.766953   11892 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 11:18:00.766957   11892 kubeadm.go:309] 
	I0507 11:18:00.766981   11892 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 11:18:00.766983   11892 kubeadm.go:309] 
	I0507 11:18:00.767004   11892 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 11:18:00.767007   11892 kubeadm.go:309] 
	I0507 11:18:00.767030   11892 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 11:18:00.767061   11892 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 11:18:00.767104   11892 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 11:18:00.767111   11892 kubeadm.go:309] 
	I0507 11:18:00.767155   11892 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 11:18:00.767207   11892 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 11:18:00.767213   11892 kubeadm.go:309] 
	I0507 11:18:00.767269   11892 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8r216w.1dkq7l997m0tj7pp \
	I0507 11:18:00.767328   11892 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc \
	I0507 11:18:00.767342   11892 kubeadm.go:309] 	--control-plane 
	I0507 11:18:00.767345   11892 kubeadm.go:309] 
	I0507 11:18:00.767417   11892 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 11:18:00.767423   11892 kubeadm.go:309] 
	I0507 11:18:00.767468   11892 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8r216w.1dkq7l997m0tj7pp \
	I0507 11:18:00.767586   11892 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc 
	I0507 11:18:00.767699   11892 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 11:18:00.767709   11892 cni.go:84] Creating CNI manager for ""
	I0507 11:18:00.767717   11892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:18:00.771519   11892 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0507 11:18:00.778508   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0507 11:18:00.782383   11892 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0507 11:18:00.787111   11892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 11:18:00.787166   11892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 11:18:00.787227   11892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-069000 minikube.k8s.io/updated_at=2024_05_07T11_18_00_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=stopped-upgrade-069000 minikube.k8s.io/primary=true
	I0507 11:18:00.792545   11892 ops.go:34] apiserver oom_adj: -16
	I0507 11:18:00.828767   11892 kubeadm.go:1107] duration metric: took 41.647709ms to wait for elevateKubeSystemPrivileges
	W0507 11:18:00.828793   11892 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 11:18:00.828799   11892 kubeadm.go:393] duration metric: took 4m11.977539541s to StartCluster
	I0507 11:18:00.828808   11892 settings.go:142] acquiring lock: {Name:mk50bfcfedcd3b99aacdbeb1994dffd265fa3e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:18:00.828893   11892 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:18:00.829330   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:18:00.829547   11892 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:18:00.833466   11892 out.go:177] * Verifying Kubernetes components...
	I0507 11:18:00.829554   11892 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 11:18:00.829640   11892 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:18:00.841473   11892 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-069000"
	I0507 11:18:00.841481   11892 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-069000"
	I0507 11:18:00.841491   11892 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-069000"
	I0507 11:18:00.841495   11892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-069000"
	I0507 11:18:00.841476   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0507 11:18:00.841496   11892 addons.go:243] addon storage-provisioner should already be in state true
	I0507 11:18:00.841543   11892 host.go:66] Checking if "stopped-upgrade-069000" exists ...
	I0507 11:18:00.842729   11892 kapi.go:59] client config for stopped-upgrade-069000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 11:18:00.842859   11892 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-069000"
	W0507 11:18:00.842864   11892 addons.go:243] addon default-storageclass should already be in state true
	I0507 11:18:00.842871   11892 host.go:66] Checking if "stopped-upgrade-069000" exists ...
	I0507 11:18:00.847402   11892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:17:56.408333   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:00.851540   11892 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:18:00.851548   11892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 11:18:00.851555   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:18:00.852277   11892 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 11:18:00.852282   11892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 11:18:00.852286   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:18:00.937177   11892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:18:00.942340   11892 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:18:00.942383   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:18:00.946272   11892 api_server.go:72] duration metric: took 116.717ms to wait for apiserver process to appear ...
	I0507 11:18:00.946281   11892 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:18:00.946289   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:00.956928   11892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 11:18:00.958041   11892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:18:01.410374   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:01.410460   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:01.422291   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:01.422357   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:01.432766   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:01.432829   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:01.443252   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:01.443332   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:01.453830   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:01.453910   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:01.465570   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:01.465638   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:01.475851   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:01.475920   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:01.485825   11681 logs.go:276] 0 containers: []
	W0507 11:18:01.485840   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:01.485899   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:01.501193   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:01.501219   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:01.501224   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:01.513241   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:01.513255   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:01.525194   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:01.525203   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:01.546662   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:01.546672   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:01.570153   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:01.570161   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:01.584331   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:01.584341   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:01.619087   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:01.619097   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:01.630865   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:01.630876   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:01.643139   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:01.643149   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:01.654628   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:01.654641   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:01.666807   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:01.666819   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:01.700136   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:01.700145   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:01.712076   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:01.712087   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:01.726703   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:01.726713   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:01.741624   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:01.741634   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:04.247930   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:05.948346   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:05.948406   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:09.250152   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:09.250304   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:09.261491   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:09.261570   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:09.272268   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:09.272334   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:09.282664   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:09.282737   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:09.300299   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:09.300365   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:09.315795   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:09.315870   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:09.326528   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:09.326591   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:09.337888   11681 logs.go:276] 0 containers: []
	W0507 11:18:09.337899   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:09.337956   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:09.348636   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:09.348651   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:09.348656   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:09.367961   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:09.367974   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:09.382761   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:09.382773   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:09.395292   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:09.395302   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:09.422188   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:09.422209   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:09.469802   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:09.469817   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:09.483836   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:09.483846   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:09.495472   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:09.495482   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:09.499749   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:09.499755   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:09.522952   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:09.522963   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:09.534483   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:09.534496   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:09.557664   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:09.557671   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:09.569255   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:09.569268   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:09.603687   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:09.603694   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:09.614946   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:09.614957   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:10.948763   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:10.948785   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:12.128330   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:15.949098   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:15.949141   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:17.130384   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:17.130514   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:17.142683   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:17.142767   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:17.153413   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:17.153479   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:17.168228   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:17.168296   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:17.179226   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:17.179303   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:17.188911   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:17.188976   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:17.199680   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:17.199739   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:17.210512   11681 logs.go:276] 0 containers: []
	W0507 11:18:17.210524   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:17.210581   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:17.221213   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:17.221239   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:17.221246   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:17.236329   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:17.236343   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:17.255893   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:17.255904   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:17.267018   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:17.267032   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:17.290023   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:17.290031   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:17.294728   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:17.294735   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:17.308964   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:17.308976   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:17.320354   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:17.320366   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:17.354841   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:17.354852   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:17.369740   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:17.369751   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:17.381953   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:17.381964   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:17.394114   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:17.394125   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:17.428417   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:17.428436   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:17.440673   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:17.440683   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:17.455137   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:17.455146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:19.969081   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:20.949529   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:20.949569   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:24.971281   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:24.971454   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:18:24.983011   11681 logs.go:276] 1 containers: [3c7abe4bc8ad]
	I0507 11:18:24.983087   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:18:24.994471   11681 logs.go:276] 1 containers: [5cf5eddc4cb3]
	I0507 11:18:24.994544   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:18:25.004977   11681 logs.go:276] 4 containers: [af0339ae18e7 4adaba011b42 db71e48abae6 563be709b2f4]
	I0507 11:18:25.005045   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:18:25.015543   11681 logs.go:276] 1 containers: [8776cd104ce3]
	I0507 11:18:25.015613   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:18:25.025879   11681 logs.go:276] 1 containers: [3c828365fe67]
	I0507 11:18:25.025950   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:18:25.036493   11681 logs.go:276] 1 containers: [7e95954291f9]
	I0507 11:18:25.036558   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:18:25.047120   11681 logs.go:276] 0 containers: []
	W0507 11:18:25.047131   11681 logs.go:278] No container was found matching "kindnet"
	I0507 11:18:25.047188   11681 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:18:25.058555   11681 logs.go:276] 1 containers: [108e9ee704c4]
	I0507 11:18:25.058572   11681 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:18:25.058577   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:18:25.098641   11681 logs.go:123] Gathering logs for kube-apiserver [3c7abe4bc8ad] ...
	I0507 11:18:25.098652   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c7abe4bc8ad"
	I0507 11:18:25.112641   11681 logs.go:123] Gathering logs for etcd [5cf5eddc4cb3] ...
	I0507 11:18:25.112651   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf5eddc4cb3"
	I0507 11:18:25.126475   11681 logs.go:123] Gathering logs for coredns [4adaba011b42] ...
	I0507 11:18:25.126485   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4adaba011b42"
	I0507 11:18:25.137762   11681 logs.go:123] Gathering logs for kube-controller-manager [7e95954291f9] ...
	I0507 11:18:25.137772   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e95954291f9"
	I0507 11:18:25.155559   11681 logs.go:123] Gathering logs for coredns [563be709b2f4] ...
	I0507 11:18:25.155569   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 563be709b2f4"
	I0507 11:18:25.167185   11681 logs.go:123] Gathering logs for container status ...
	I0507 11:18:25.167196   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:18:25.178552   11681 logs.go:123] Gathering logs for Docker ...
	I0507 11:18:25.178561   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:18:25.202943   11681 logs.go:123] Gathering logs for kube-proxy [3c828365fe67] ...
	I0507 11:18:25.202950   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c828365fe67"
	I0507 11:18:25.217270   11681 logs.go:123] Gathering logs for storage-provisioner [108e9ee704c4] ...
	I0507 11:18:25.217283   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 108e9ee704c4"
	I0507 11:18:25.228550   11681 logs.go:123] Gathering logs for kubelet ...
	I0507 11:18:25.228564   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:18:25.262196   11681 logs.go:123] Gathering logs for dmesg ...
	I0507 11:18:25.262208   11681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:18:25.266902   11681 logs.go:123] Gathering logs for coredns [af0339ae18e7] ...
	I0507 11:18:25.266908   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af0339ae18e7"
	I0507 11:18:25.278508   11681 logs.go:123] Gathering logs for coredns [db71e48abae6] ...
	I0507 11:18:25.278522   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db71e48abae6"
	I0507 11:18:25.290132   11681 logs.go:123] Gathering logs for kube-scheduler [8776cd104ce3] ...
	I0507 11:18:25.290146   11681 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8776cd104ce3"
	I0507 11:18:25.950403   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:25.950444   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:27.808619   11681 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:30.951218   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:30.951238   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0507 11:18:31.342424   11892 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0507 11:18:31.347306   11892 out.go:177] * Enabled addons: storage-provisioner
	I0507 11:18:32.810717   11681 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:32.815147   11681 out.go:177] 
	W0507 11:18:32.819175   11681 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0507 11:18:32.819182   11681 out.go:239] * 
	W0507 11:18:32.819616   11681 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:18:32.831118   11681 out.go:177] 
	I0507 11:18:31.355486   11892 addons.go:505] duration metric: took 30.526810334s for enable addons: enabled=[storage-provisioner]
	I0507 11:18:35.952225   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:35.952267   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:40.953695   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:40.953715   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
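
The two interleaved processes above (pids 11681 and 11892) are both running the same loop: GET https://10.0.2.15:8443/healthz with a short per-request timeout, sleep, and retry until an overall deadline expires. A minimal Go sketch of that pattern follows; the timeout values, the InsecureSkipVerify shortcut, and the waitForHealthz name are illustrative assumptions, not minikube's actual api_server.go code.

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 OK or ctx expires.
    func waitForHealthz(ctx context.Context, url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request timeout (assumed value)
    		Transport: &http.Transport{
    			// the in-VM apiserver certificate is not in the host trust store
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz reported healthy
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
    		case <-time.After(time.Second): // retry interval (assumed value)
    		}
    	}
    }

    func main() {
    	// "wait 6m0s for node" in the failure below suggests the overall budget
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	fmt.Println(waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"))
    }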
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-05-07 18:09:32 UTC, ends at Tue 2024-05-07 18:18:48 UTC. --
	May 07 18:18:33 running-upgrade-776000 dockerd[3216]: time="2024-05-07T18:18:33.864869195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:18:33 running-upgrade-776000 dockerd[3216]: time="2024-05-07T18:18:33.864921815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:18:33 running-upgrade-776000 dockerd[3216]: time="2024-05-07T18:18:33.864927815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:18:33 running-upgrade-776000 dockerd[3216]: time="2024-05-07T18:18:33.865093925Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fa1a7739f78e24d6b836a73ed951211fa86b500409ae56d16fa7b495b8412a6e pid=18568 runtime=io.containerd.runc.v2
	May 07 18:18:34 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:34Z" level=error msg="ContainerStats resp: {0x40005c1e40 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x400080a600 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x40007259c0 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x4000725ec0 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x40000b8040 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x4000514700 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x40004e4380 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=error msg="ContainerStats resp: {0x40004e5c40 linux}"
	May 07 18:18:35 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:35Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 07 18:18:40 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 07 18:18:45 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:45Z" level=error msg="ContainerStats resp: {0x40006fbc00 linux}"
	May 07 18:18:45 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:45Z" level=error msg="ContainerStats resp: {0x4000938580 linux}"
	May 07 18:18:45 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 07 18:18:46 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:46Z" level=error msg="ContainerStats resp: {0x40000b8e00 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x4000724040 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x40007241c0 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x4000724a00 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x4000724c00 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x4000725480 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x40007258c0 linux}"
	May 07 18:18:47 running-upgrade-776000 cri-dockerd[3057]: time="2024-05-07T18:18:47Z" level=error msg="ContainerStats resp: {0x4000515640 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fa1a7739f78e2       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   baec89a127ae1
	d49f09f069dc5       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   87c57aa6dee96
	af0339ae18e7e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   baec89a127ae1
	4adaba011b42f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   87c57aa6dee96
	3c828365fe67f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   55e32ab9a6713
	108e9ee704c4a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   8848b1d649504
	8776cd104ce35       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a46d451713a03
	7e95954291f97       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   84a6517c433c4
	3c7abe4bc8ad2       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   b783dae0924c3
	5cf5eddc4cb3b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   6384b2d025420
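
The "container status" gather step earlier in the log (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a) tries crictl first and falls back to the docker CLI when crictl is absent or fails. A simplified Go sketch of that fallback, run locally rather than over ssh, with the error handling reduced to a minimum:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus lists all containers via crictl, falling back to docker.
    func containerStatus() ([]byte, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err == nil {
    		return out, nil
    	}
    	// crictl missing or failed; fall back to the docker CLI
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Println(string(out), err)
    }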
	
	
	==> coredns [4adaba011b42] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:53398->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:52749->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:60739->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:49647->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:47275->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:38919->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:45416->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:38677->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:60327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3559839791298271677.4294163441517328147. HINFO: read udp 10.244.0.2:48140->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [af0339ae18e7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:51889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:42082->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:47058->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:33728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:45012->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:38668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:35360->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:47124->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:35006->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8682388138262172060.7192012497948494820. HINFO: read udp 10.244.0.3:36264->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d49f09f069dc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5879827871877293238.4270606874993322012. HINFO: read udp 10.244.0.2:39143->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5879827871877293238.4270606874993322012. HINFO: read udp 10.244.0.2:40299->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5879827871877293238.4270606874993322012. HINFO: read udp 10.244.0.2:40499->10.0.2.3:53: i/o timeout
	
	
	==> coredns [fa1a7739f78e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5352686614235798414.3851808486208409877. HINFO: read udp 10.244.0.3:39682->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5352686614235798414.3851808486208409877. HINFO: read udp 10.244.0.3:41834->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5352686614235798414.3851808486208409877. HINFO: read udp 10.244.0.3:38369->10.0.2.3:53: i/o timeout
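
Every coredns instance above logs the same symptom: its startup HINFO probe to the upstream resolver at 10.0.2.3:53 (QEMU's user-mode network DNS) times out, so in-cluster DNS has no working upstream. A hypothetical reachability check for that upstream, which reproduces the "read udp ... i/o timeout" behavior when 10.0.2.3:53 is unreachable:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			// force every lookup through the upstream under test
    			return d.DialContext(ctx, network, "10.0.2.3:53")
    		},
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	addrs, err := r.LookupHost(ctx, "kubernetes.io")
    	// err is an i/o timeout when the upstream never answers
    	fmt.Println(addrs, err)
    }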
	
	
	==> describe nodes <==
	Name:               running-upgrade-776000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-776000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=running-upgrade-776000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T11_14_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:14:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-776000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 18:18:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:14:32 +0000   Tue, 07 May 2024 18:14:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:14:32 +0000   Tue, 07 May 2024 18:14:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:14:32 +0000   Tue, 07 May 2024 18:14:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:14:32 +0000   Tue, 07 May 2024 18:14:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-776000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b16172e5c5c471f8a19c036bd0d6b84
	  System UUID:                1b16172e5c5c471f8a19c036bd0d6b84
	  Boot ID:                    c4cc77cb-ccbd-4dac-9ead-6e63365dffd6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2gvc7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-lbhvt                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-776000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-776000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-776000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-5x6td                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-776000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-776000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-776000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-776000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-776000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-776000 event: Registered Node running-upgrade-776000 in Controller
	
	
	==> dmesg <==
	[  +1.984411] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.080533] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.084069] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.132643] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091260] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.077541] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.643476] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[ +10.143432] systemd-fstab-generator[1936]: Ignoring "noauto" for root device
	[May 7 18:10] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.139726] systemd-fstab-generator[2248]: Ignoring "noauto" for root device
	[  +0.099332] systemd-fstab-generator[2259]: Ignoring "noauto" for root device
	[  +0.090521] systemd-fstab-generator[2272]: Ignoring "noauto" for root device
	[ +12.521203] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.212208] systemd-fstab-generator[3012]: Ignoring "noauto" for root device
	[  +0.081084] systemd-fstab-generator[3025]: Ignoring "noauto" for root device
	[  +0.077199] systemd-fstab-generator[3036]: Ignoring "noauto" for root device
	[  +0.092159] systemd-fstab-generator[3050]: Ignoring "noauto" for root device
	[  +2.292597] systemd-fstab-generator[3203]: Ignoring "noauto" for root device
	[  +3.190406] systemd-fstab-generator[3590]: Ignoring "noauto" for root device
	[  +0.918047] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
	[ +18.478067] kauditd_printk_skb: 68 callbacks suppressed
	[May 7 18:14] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.328187] systemd-fstab-generator[11605]: Ignoring "noauto" for root device
	[  +5.130572] systemd-fstab-generator[12192]: Ignoring "noauto" for root device
	[  +0.469472] systemd-fstab-generator[12325]: Ignoring "noauto" for root device
	
	
	==> etcd [5cf5eddc4cb3] <==
	{"level":"info","ts":"2024-05-07T18:14:28.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-07T18:14:28.137Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-07T18:14:28.162Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-07T18:14:28.162Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-07T18:14:28.162Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-07T18:14:28.162Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-07T18:14:28.162Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-07T18:14:28.508Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T18:14:28.513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T18:14:28.513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T18:14:28.513Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T18:14:28.513Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-776000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-07T18:14:28.513Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T18:14:28.514Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-07T18:14:28.517Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T18:14:28.518Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-05-07T18:14:28.518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-07T18:14:28.518Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:18:49 up 9 min,  0 users,  load average: 0.36, 0.35, 0.19
	Linux running-upgrade-776000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3c7abe4bc8ad] <==
	I0507 18:14:29.781332       1 controller.go:611] quota admission added evaluator for: namespaces
	I0507 18:14:29.820385       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0507 18:14:29.820429       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0507 18:14:29.825582       1 cache.go:39] Caches are synced for autoregister controller
	I0507 18:14:29.828244       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0507 18:14:29.828343       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0507 18:14:29.829148       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0507 18:14:30.550227       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0507 18:14:30.729019       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0507 18:14:30.732712       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0507 18:14:30.732750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0507 18:14:30.868296       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 18:14:30.878776       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0507 18:14:30.985299       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0507 18:14:30.987344       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0507 18:14:30.987756       1 controller.go:611] quota admission added evaluator for: endpoints
	I0507 18:14:30.989062       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0507 18:14:31.850978       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0507 18:14:32.174892       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0507 18:14:32.180345       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0507 18:14:32.185425       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0507 18:14:32.230204       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0507 18:14:45.452756       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0507 18:14:45.553298       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0507 18:14:45.952650       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [7e95954291f9] <==
	I0507 18:14:44.900435       1 disruption.go:371] Sending events to api server.
	I0507 18:14:44.900467       1 shared_informer.go:262] Caches are synced for GC
	I0507 18:14:44.900475       1 shared_informer.go:262] Caches are synced for TTL
	I0507 18:14:44.902074       1 shared_informer.go:262] Caches are synced for HPA
	I0507 18:14:44.912617       1 shared_informer.go:262] Caches are synced for resource quota
	I0507 18:14:44.913667       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0507 18:14:44.915820       1 shared_informer.go:262] Caches are synced for attach detach
	I0507 18:14:44.918196       1 shared_informer.go:262] Caches are synced for PVC protection
	I0507 18:14:44.920376       1 shared_informer.go:262] Caches are synced for persistent volume
	I0507 18:14:44.925950       1 shared_informer.go:262] Caches are synced for taint
	I0507 18:14:44.925980       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0507 18:14:44.926001       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-776000. Assuming now as a timestamp.
	I0507 18:14:44.926024       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0507 18:14:44.926125       1 event.go:294] "Event occurred" object="running-upgrade-776000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-776000 event: Registered Node running-upgrade-776000 in Controller"
	I0507 18:14:44.926133       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0507 18:14:44.950221       1 shared_informer.go:262] Caches are synced for stateful set
	I0507 18:14:44.951770       1 shared_informer.go:262] Caches are synced for deployment
	I0507 18:14:44.952046       1 shared_informer.go:262] Caches are synced for endpoint
	I0507 18:14:45.320966       1 shared_informer.go:262] Caches are synced for garbage collector
	I0507 18:14:45.349344       1 shared_informer.go:262] Caches are synced for garbage collector
	I0507 18:14:45.349354       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0507 18:14:45.455970       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5x6td"
	I0507 18:14:45.554321       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0507 18:14:45.704525       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lbhvt"
	I0507 18:14:45.711073       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2gvc7"
	
	
	==> kube-proxy [3c828365fe67] <==
	I0507 18:14:45.942019       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0507 18:14:45.942043       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0507 18:14:45.942053       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0507 18:14:45.951051       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0507 18:14:45.951064       1 server_others.go:206] "Using iptables Proxier"
	I0507 18:14:45.951075       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0507 18:14:45.951166       1 server.go:661] "Version info" version="v1.24.1"
	I0507 18:14:45.951170       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 18:14:45.951383       1 config.go:317] "Starting service config controller"
	I0507 18:14:45.951389       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0507 18:14:45.951395       1 config.go:226] "Starting endpoint slice config controller"
	I0507 18:14:45.951397       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0507 18:14:45.951626       1 config.go:444] "Starting node config controller"
	I0507 18:14:45.951627       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0507 18:14:46.052402       1 shared_informer.go:262] Caches are synced for node config
	I0507 18:14:46.052424       1 shared_informer.go:262] Caches are synced for service config
	I0507 18:14:46.052436       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8776cd104ce3] <==
	W0507 18:14:29.779388       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0507 18:14:29.779402       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0507 18:14:29.779989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0507 18:14:29.780027       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0507 18:14:29.780074       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0507 18:14:29.780084       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0507 18:14:29.780112       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0507 18:14:29.780142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0507 18:14:29.780195       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0507 18:14:29.780225       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0507 18:14:29.780259       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 18:14:29.780271       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0507 18:14:29.780341       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0507 18:14:29.780372       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 18:14:30.629545       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0507 18:14:30.629573       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0507 18:14:30.683563       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0507 18:14:30.683580       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0507 18:14:30.762417       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0507 18:14:30.762537       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0507 18:14:30.790432       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0507 18:14:30.790583       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 18:14:30.809823       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 18:14:30.809838       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 18:14:32.681518       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-05-07 18:09:32 UTC, ends at Tue 2024-05-07 18:18:49 UTC. --
	May 07 18:14:33 running-upgrade-776000 kubelet[12198]: I0507 18:14:33.438175   12198 reconciler.go:157] "Reconciler: start to sync state"
	May 07 18:14:33 running-upgrade-776000 kubelet[12198]: E0507 18:14:33.809269   12198 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-776000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-776000"
	May 07 18:14:34 running-upgrade-776000 kubelet[12198]: E0507 18:14:34.010138   12198 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-776000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-776000"
	May 07 18:14:34 running-upgrade-776000 kubelet[12198]: E0507 18:14:34.210330   12198 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-776000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-776000"
	May 07 18:14:34 running-upgrade-776000 kubelet[12198]: I0507 18:14:34.408067   12198 request.go:601] Waited for 1.117356846s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 07 18:14:34 running-upgrade-776000 kubelet[12198]: E0507 18:14:34.410416   12198 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-776000\" already exists" pod="kube-system/etcd-running-upgrade-776000"
	May 07 18:14:44 running-upgrade-776000 kubelet[12198]: I0507 18:14:44.907265   12198 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 07 18:14:44 running-upgrade-776000 kubelet[12198]: I0507 18:14:44.907917   12198 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 07 18:14:44 running-upgrade-776000 kubelet[12198]: I0507 18:14:44.931029   12198 topology_manager.go:200] "Topology Admit Handler"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.108799   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42b5fd22-d7f5-49ee-9f5d-5f1d33c74e8f-tmp\") pod \"storage-provisioner\" (UID: \"42b5fd22-d7f5-49ee-9f5d-5f1d33c74e8f\") " pod="kube-system/storage-provisioner"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.108856   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpc58\" (UniqueName: \"kubernetes.io/projected/42b5fd22-d7f5-49ee-9f5d-5f1d33c74e8f-kube-api-access-kpc58\") pod \"storage-provisioner\" (UID: \"42b5fd22-d7f5-49ee-9f5d-5f1d33c74e8f\") " pod="kube-system/storage-provisioner"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.366393   12198 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8848b1d649504dee42be331ea3b4521f1559acb5f513be59d52e1d19c58c6824"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.458345   12198 topology_manager.go:200] "Topology Admit Handler"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.611021   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a3f153c-6567-4d17-b64b-f670e8f57a07-kube-proxy\") pod \"kube-proxy-5x6td\" (UID: \"3a3f153c-6567-4d17-b64b-f670e8f57a07\") " pod="kube-system/kube-proxy-5x6td"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.611114   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a3f153c-6567-4d17-b64b-f670e8f57a07-lib-modules\") pod \"kube-proxy-5x6td\" (UID: \"3a3f153c-6567-4d17-b64b-f670e8f57a07\") " pod="kube-system/kube-proxy-5x6td"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.611148   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46thd\" (UniqueName: \"kubernetes.io/projected/3a3f153c-6567-4d17-b64b-f670e8f57a07-kube-api-access-46thd\") pod \"kube-proxy-5x6td\" (UID: \"3a3f153c-6567-4d17-b64b-f670e8f57a07\") " pod="kube-system/kube-proxy-5x6td"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.611165   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a3f153c-6567-4d17-b64b-f670e8f57a07-xtables-lock\") pod \"kube-proxy-5x6td\" (UID: \"3a3f153c-6567-4d17-b64b-f670e8f57a07\") " pod="kube-system/kube-proxy-5x6td"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.706899   12198 topology_manager.go:200] "Topology Admit Handler"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.711972   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pglp\" (UniqueName: \"kubernetes.io/projected/40367c7b-cf26-4d25-9b1d-fdea088295c2-kube-api-access-9pglp\") pod \"coredns-6d4b75cb6d-lbhvt\" (UID: \"40367c7b-cf26-4d25-9b1d-fdea088295c2\") " pod="kube-system/coredns-6d4b75cb6d-lbhvt"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.712028   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40367c7b-cf26-4d25-9b1d-fdea088295c2-config-volume\") pod \"coredns-6d4b75cb6d-lbhvt\" (UID: \"40367c7b-cf26-4d25-9b1d-fdea088295c2\") " pod="kube-system/coredns-6d4b75cb6d-lbhvt"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.712232   12198 topology_manager.go:200] "Topology Admit Handler"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.812143   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf79550-cb14-455a-ade0-40cf31890a57-config-volume\") pod \"coredns-6d4b75cb6d-2gvc7\" (UID: \"cbf79550-cb14-455a-ade0-40cf31890a57\") " pod="kube-system/coredns-6d4b75cb6d-2gvc7"
	May 07 18:14:45 running-upgrade-776000 kubelet[12198]: I0507 18:14:45.812172   12198 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grpvf\" (UniqueName: \"kubernetes.io/projected/cbf79550-cb14-455a-ade0-40cf31890a57-kube-api-access-grpvf\") pod \"coredns-6d4b75cb6d-2gvc7\" (UID: \"cbf79550-cb14-455a-ade0-40cf31890a57\") " pod="kube-system/coredns-6d4b75cb6d-2gvc7"
	May 07 18:18:34 running-upgrade-776000 kubelet[12198]: I0507 18:18:34.651026   12198 scope.go:110] "RemoveContainer" containerID="563be709b2f4fa0eb30734615305801263f5ba745bbfc5bccbd88dd6f8e48e53"
	May 07 18:18:34 running-upgrade-776000 kubelet[12198]: I0507 18:18:34.665355   12198 scope.go:110] "RemoveContainer" containerID="db71e48abae6987ece86f78473fc26aa625663ecd7232ceb70ebdc6db3a9f91e"
	
	
	==> storage-provisioner [108e9ee704c4] <==
	I0507 18:14:45.428875       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0507 18:14:45.432896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0507 18:14:45.432910       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0507 18:14:45.436008       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0507 18:14:45.436195       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34d265ae-e1c1-4fe5-9a23-6a5b12241d60", APIVersion:"v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-776000_49b76739-2263-47df-8d6b-67c3d1c9ce4f became leader
	I0507 18:14:45.436228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-776000_49b76739-2263-47df-8d6b-67c3d1c9ce4f!
	I0507 18:14:45.537309       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-776000_49b76739-2263-47df-8d6b-67c3d1c9ce4f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-776000 -n running-upgrade-776000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-776000 -n running-upgrade-776000: exit status 2 (15.62543975s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-776000" apiserver is not running, skipping kubectl commands (state="Stopped")
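
The --format={{.APIServer}} flag used above is a Go text/template evaluated against minikube's status struct, which is why the command prints just "Stopped". A toy illustration of the mechanism; the Status struct here is invented for the example and is not minikube's actual type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status stands in for the struct the template is rendered against.
    type Status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	// prints "Stopped", matching the captured output above
    	tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
    }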
helpers_test.go:175: Cleaning up "running-upgrade-776000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-776000
--- FAIL: TestRunningBinaryUpgrade (598.16s)

                                                
                                    
TestKubernetesUpgrade (18.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.997306s)

-- stdout --
	* [kubernetes-upgrade-133000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-133000" primary control-plane node in "kubernetes-upgrade-133000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-133000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:12:07.105352   11794 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:12:07.105470   11794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:12:07.105474   11794 out.go:304] Setting ErrFile to fd 2...
	I0507 11:12:07.105478   11794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:12:07.105611   11794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:12:07.106708   11794 out.go:298] Setting JSON to false
	I0507 11:12:07.123056   11794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6098,"bootTime":1715099429,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:12:07.123110   11794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:12:07.127458   11794 out.go:177] * [kubernetes-upgrade-133000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:12:07.131214   11794 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:12:07.131261   11794 notify.go:220] Checking for updates...
	I0507 11:12:07.140326   11794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:12:07.143293   11794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:12:07.146266   11794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:12:07.149312   11794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:12:07.150774   11794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:12:07.154644   11794 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:12:07.154712   11794 config.go:182] Loaded profile config "running-upgrade-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:12:07.154760   11794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:12:07.159292   11794 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:12:07.164308   11794 start.go:297] selected driver: qemu2
	I0507 11:12:07.164316   11794 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:12:07.164323   11794 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:12:07.166547   11794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:12:07.169266   11794 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:12:07.172398   11794 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 11:12:07.172411   11794 cni.go:84] Creating CNI manager for ""
	I0507 11:12:07.172421   11794 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0507 11:12:07.172448   11794 start.go:340] cluster config:
	{Name:kubernetes-upgrade-133000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:12:07.176850   11794 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:12:07.184339   11794 out.go:177] * Starting "kubernetes-upgrade-133000" primary control-plane node in "kubernetes-upgrade-133000" cluster
	I0507 11:12:07.188263   11794 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 11:12:07.188281   11794 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 11:12:07.188287   11794 cache.go:56] Caching tarball of preloaded images
	I0507 11:12:07.188346   11794 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:12:07.188355   11794 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0507 11:12:07.188406   11794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kubernetes-upgrade-133000/config.json ...
	I0507 11:12:07.188417   11794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kubernetes-upgrade-133000/config.json: {Name:mk4f1ea848e209bcc376851d1ba9be99f4ea3db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:12:07.188801   11794 start.go:360] acquireMachinesLock for kubernetes-upgrade-133000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:12:07.188836   11794 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "kubernetes-upgrade-133000"
	I0507 11:12:07.188847   11794 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:12:07.188872   11794 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:12:07.192359   11794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:12:07.217930   11794 start.go:159] libmachine.API.Create for "kubernetes-upgrade-133000" (driver="qemu2")
	I0507 11:12:07.217956   11794 client.go:168] LocalClient.Create starting
	I0507 11:12:07.218022   11794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:12:07.218054   11794 main.go:141] libmachine: Decoding PEM data...
	I0507 11:12:07.218065   11794 main.go:141] libmachine: Parsing certificate...
	I0507 11:12:07.218109   11794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:12:07.218135   11794 main.go:141] libmachine: Decoding PEM data...
	I0507 11:12:07.218147   11794 main.go:141] libmachine: Parsing certificate...
	I0507 11:12:07.218574   11794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:12:07.365602   11794 main.go:141] libmachine: Creating SSH key...
	I0507 11:12:07.492927   11794 main.go:141] libmachine: Creating Disk image...
	I0507 11:12:07.492934   11794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:12:07.493119   11794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:07.506525   11794 main.go:141] libmachine: STDOUT: 
	I0507 11:12:07.506555   11794 main.go:141] libmachine: STDERR: 
	I0507 11:12:07.506616   11794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2 +20000M
	I0507 11:12:07.517819   11794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:12:07.517840   11794 main.go:141] libmachine: STDERR: 
	I0507 11:12:07.517855   11794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:07.517859   11794 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:12:07.517894   11794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:aa:1b:b7:aa:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:07.519592   11794 main.go:141] libmachine: STDOUT: 
	I0507 11:12:07.519608   11794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:12:07.519631   11794 client.go:171] duration metric: took 301.680334ms to LocalClient.Create
	I0507 11:12:09.521802   11794 start.go:128] duration metric: took 2.332990375s to createHost
	I0507 11:12:09.521881   11794 start.go:83] releasing machines lock for "kubernetes-upgrade-133000", held for 2.333119708s
	W0507 11:12:09.521931   11794 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:12:09.527452   11794 out.go:177] * Deleting "kubernetes-upgrade-133000" in qemu2 ...
	W0507 11:12:09.548516   11794 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:12:09.548537   11794 start.go:728] Will try again in 5 seconds ...
	I0507 11:12:14.550195   11794 start.go:360] acquireMachinesLock for kubernetes-upgrade-133000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:12:14.550870   11794 start.go:364] duration metric: took 519.041µs to acquireMachinesLock for "kubernetes-upgrade-133000"
	I0507 11:12:14.550943   11794 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:12:14.551181   11794 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:12:14.556912   11794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:12:14.608080   11794 start.go:159] libmachine.API.Create for "kubernetes-upgrade-133000" (driver="qemu2")
	I0507 11:12:14.608135   11794 client.go:168] LocalClient.Create starting
	I0507 11:12:14.608254   11794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:12:14.608319   11794 main.go:141] libmachine: Decoding PEM data...
	I0507 11:12:14.608338   11794 main.go:141] libmachine: Parsing certificate...
	I0507 11:12:14.608403   11794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:12:14.608446   11794 main.go:141] libmachine: Decoding PEM data...
	I0507 11:12:14.608457   11794 main.go:141] libmachine: Parsing certificate...
	I0507 11:12:14.609018   11794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:12:14.758623   11794 main.go:141] libmachine: Creating SSH key...
	I0507 11:12:15.009760   11794 main.go:141] libmachine: Creating Disk image...
	I0507 11:12:15.009771   11794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:12:15.009979   11794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:15.022929   11794 main.go:141] libmachine: STDOUT: 
	I0507 11:12:15.022950   11794 main.go:141] libmachine: STDERR: 
	I0507 11:12:15.023010   11794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2 +20000M
	I0507 11:12:15.034369   11794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:12:15.034385   11794 main.go:141] libmachine: STDERR: 
	I0507 11:12:15.034399   11794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:15.034405   11794 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:12:15.034446   11794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:86:b1:98:59:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:15.036210   11794 main.go:141] libmachine: STDOUT: 
	I0507 11:12:15.036224   11794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:12:15.036250   11794 client.go:171] duration metric: took 428.124ms to LocalClient.Create
	I0507 11:12:17.038374   11794 start.go:128] duration metric: took 2.487241208s to createHost
	I0507 11:12:17.038446   11794 start.go:83] releasing machines lock for "kubernetes-upgrade-133000", held for 2.487636333s
	W0507 11:12:17.038789   11794 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-133000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-133000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:12:17.046428   11794 out.go:177] 
	W0507 11:12:17.051453   11794 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:12:17.051480   11794 out.go:239] * 
	* 
	W0507 11:12:17.053023   11794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:12:17.062431   11794 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-133000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-133000: (2.857021166s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-133000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-133000 status --format={{.Host}}: exit status 7 (52.600333ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.170177292s)

-- stdout --
	* [kubernetes-upgrade-133000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-133000" primary control-plane node in "kubernetes-upgrade-133000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-133000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-133000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:12:20.016091   11830 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:12:20.016257   11830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:12:20.016260   11830 out.go:304] Setting ErrFile to fd 2...
	I0507 11:12:20.016262   11830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:12:20.016396   11830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:12:20.017459   11830 out.go:298] Setting JSON to false
	I0507 11:12:20.033718   11830 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6111,"bootTime":1715099429,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:12:20.033806   11830 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:12:20.038482   11830 out.go:177] * [kubernetes-upgrade-133000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:12:20.046315   11830 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:12:20.050427   11830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:12:20.046357   11830 notify.go:220] Checking for updates...
	I0507 11:12:20.053404   11830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:12:20.054945   11830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:12:20.058417   11830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:12:20.061392   11830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:12:20.064689   11830 config.go:182] Loaded profile config "kubernetes-upgrade-133000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0507 11:12:20.064935   11830 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:12:20.069460   11830 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:12:20.076405   11830 start.go:297] selected driver: qemu2
	I0507 11:12:20.076411   11830 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:12:20.076458   11830 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:12:20.078688   11830 cni.go:84] Creating CNI manager for ""
	I0507 11:12:20.078705   11830 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:12:20.078744   11830 start.go:340] cluster config:
	{Name:kubernetes-upgrade-133000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:12:20.083044   11830 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:12:20.090396   11830 out.go:177] * Starting "kubernetes-upgrade-133000" primary control-plane node in "kubernetes-upgrade-133000" cluster
	I0507 11:12:20.094377   11830 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:12:20.094401   11830 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:12:20.094409   11830 cache.go:56] Caching tarball of preloaded images
	I0507 11:12:20.094467   11830 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:12:20.094472   11830 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:12:20.094529   11830 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kubernetes-upgrade-133000/config.json ...
	I0507 11:12:20.094924   11830 start.go:360] acquireMachinesLock for kubernetes-upgrade-133000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:12:20.094951   11830 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "kubernetes-upgrade-133000"
	I0507 11:12:20.094961   11830 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:12:20.094966   11830 fix.go:54] fixHost starting: 
	I0507 11:12:20.095089   11830 fix.go:112] recreateIfNeeded on kubernetes-upgrade-133000: state=Stopped err=<nil>
	W0507 11:12:20.095097   11830 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:12:20.099448   11830 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-133000" ...
	I0507 11:12:20.103426   11830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:86:b1:98:59:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:20.105433   11830 main.go:141] libmachine: STDOUT: 
	I0507 11:12:20.105458   11830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:12:20.105487   11830 fix.go:56] duration metric: took 10.520792ms for fixHost
	I0507 11:12:20.105492   11830 start.go:83] releasing machines lock for "kubernetes-upgrade-133000", held for 10.536833ms
	W0507 11:12:20.105503   11830 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:12:20.105540   11830 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:12:20.105549   11830 start.go:728] Will try again in 5 seconds ...
	I0507 11:12:25.107430   11830 start.go:360] acquireMachinesLock for kubernetes-upgrade-133000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:12:25.107620   11830 start.go:364] duration metric: took 164.916µs to acquireMachinesLock for "kubernetes-upgrade-133000"
	I0507 11:12:25.107676   11830 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:12:25.107683   11830 fix.go:54] fixHost starting: 
	I0507 11:12:25.107902   11830 fix.go:112] recreateIfNeeded on kubernetes-upgrade-133000: state=Stopped err=<nil>
	W0507 11:12:25.107917   11830 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:12:25.118276   11830 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-133000" ...
	I0507 11:12:25.122134   11830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:86:b1:98:59:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubernetes-upgrade-133000/disk.qcow2
	I0507 11:12:25.125234   11830 main.go:141] libmachine: STDOUT: 
	I0507 11:12:25.125261   11830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:12:25.125288   11830 fix.go:56] duration metric: took 17.606666ms for fixHost
	I0507 11:12:25.125293   11830 start.go:83] releasing machines lock for "kubernetes-upgrade-133000", held for 17.66675ms
	W0507 11:12:25.125355   11830 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-133000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-133000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:12:25.133970   11830 out.go:177] 
	W0507 11:12:25.137086   11830 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:12:25.137100   11830 out.go:239] * 
	* 
	W0507 11:12:25.137792   11830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:12:25.148129   11830 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-133000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-133000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-133000 version --output=json: exit status 1 (32.248583ms)

** stderr ** 
	error: context "kubernetes-upgrade-133000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-07 11:12:25.190466 -0700 PDT m=+925.452457585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-133000 -n kubernetes-upgrade-133000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-133000 -n kubernetes-upgrade-133000: exit status 7 (29.629875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-133000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-133000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-133000
--- FAIL: TestKubernetesUpgrade (18.24s)
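Note on the failure mode above: every qemu2 start in this test (and in TestRunningBinaryUpgrade before it) dies at the same point, before the VM ever boots, because socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"). That ordinarily means the socket_vmnet daemon is not running on the build host. A minimal standalone probe of that socket is sketched below in Go; it is not part of the test suite, the socket path is taken from the logs above, and checkSocketVMnet is an illustrative name rather than a minikube API.

package main

import (
	"fmt"
	"net"
	"time"
)

// checkSocketVMnet dials the socket_vmnet unix socket the way any client
// would; a "connection refused" error here reproduces the failure in the logs.
func checkSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the fix is on the host side (start the socket_vmnet daemon), not in minikube or the tests themselves.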

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18804
- KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current112697774/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.02s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18804
- KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1593250042/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)
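Both hyperkit skip-upgrade subtests fail for the same environmental reason: hyperkit is an Intel-only macOS hypervisor, so minikube rejects the driver outright on this arm64 host (exit status 56, DRV_UNSUPPORTED_OS) before any upgrade logic runs. A check along these lines, sketched in Go purely for illustration and not minikube's actual code, is effectively all that executes before the test gives up:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// hyperkit only runs on Intel Macs; on darwin/arm64 the driver is
	// refused up front, matching the DRV_UNSUPPORTED_OS exit in the logs.
	if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
		fmt.Println("hyperkit driver unsupported on", runtime.GOOS+"/"+runtime.GOARCH)
	}
}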

TestStoppedBinaryUpgrade/Upgrade (579.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3431284473 start -p stopped-upgrade-069000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3431284473 start -p stopped-upgrade-069000 --memory=2200 --vm-driver=qemu2 : (41.138123291s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3431284473 -p stopped-upgrade-069000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3431284473 -p stopped-upgrade-069000 stop: (12.094228958s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-069000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-069000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m45.8595315s)

-- stdout --
	* [stopped-upgrade-069000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-069000" primary control-plane node in "stopped-upgrade-069000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-069000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0507 11:13:19.642023   11892 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:13:19.642168   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:13:19.642172   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:13:19.642174   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:13:19.642308   11892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:13:19.643392   11892 out.go:298] Setting JSON to false
	I0507 11:13:19.660993   11892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6170,"bootTime":1715099429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:13:19.661053   11892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:13:19.665835   11892 out.go:177] * [stopped-upgrade-069000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:13:19.673851   11892 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:13:19.675356   11892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:13:19.673955   11892 notify.go:220] Checking for updates...
	I0507 11:13:19.680782   11892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:13:19.683844   11892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:13:19.686733   11892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:13:19.689802   11892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:13:19.693106   11892 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:13:19.696740   11892 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0507 11:13:19.699786   11892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:13:19.703793   11892 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:13:19.710754   11892 start.go:297] selected driver: qemu2
	I0507 11:13:19.710761   11892 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:13:19.710809   11892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:13:19.713235   11892 cni.go:84] Creating CNI manager for ""
	I0507 11:13:19.713255   11892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:13:19.713280   11892 start.go:340] cluster config:
	{Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:13:19.713325   11892 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:13:19.718748   11892 out.go:177] * Starting "stopped-upgrade-069000" primary control-plane node in "stopped-upgrade-069000" cluster
	I0507 11:13:19.722843   11892 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0507 11:13:19.722860   11892 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0507 11:13:19.722868   11892 cache.go:56] Caching tarball of preloaded images
	I0507 11:13:19.722930   11892 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:13:19.722935   11892 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0507 11:13:19.722990   11892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/config.json ...
	I0507 11:13:19.723478   11892 start.go:360] acquireMachinesLock for stopped-upgrade-069000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:13:19.723508   11892 start.go:364] duration metric: took 24.667µs to acquireMachinesLock for "stopped-upgrade-069000"
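
start.go takes the machine lock with a 500ms retry delay and a 13-minute timeout, then logs the wait as a "duration metric". A hedged Go sketch of that poll-until-timeout pattern; Spec, tryAcquire, and acquire are illustrative names (minikube's real lock is cross-process, this one is in-process only):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Spec carries the same knobs the log prints: a retry delay and an
// overall timeout for taking a named lock.
type Spec struct {
	Name    string
	Delay   time.Duration // pause between attempts (500ms above)
	Timeout time.Duration // give up after this long (13m above)
}

var (
	mu   sync.Mutex
	held = map[string]bool{}
)

// tryAcquire is an in-process stand-in for minikube's cross-process lock.
func tryAcquire(name string) bool {
	mu.Lock()
	defer mu.Unlock()
	if held[name] {
		return false
	}
	held[name] = true
	return true
}

func acquire(s Spec) error {
	deadline := time.Now().Add(s.Timeout)
	for !tryAcquire(s.Name) {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + s.Name)
		}
		time.Sleep(s.Delay)
	}
	return nil
}

func main() {
	start := time.Now()
	s := Spec{Name: "stopped-upgrade-069000", Delay: 500 * time.Millisecond, Timeout: 13 * time.Minute}
	if err := acquire(s); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
}
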
	I0507 11:13:19.723517   11892 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:13:19.723523   11892 fix.go:54] fixHost starting: 
	I0507 11:13:19.723621   11892 fix.go:112] recreateIfNeeded on stopped-upgrade-069000: state=Stopped err=<nil>
	W0507 11:13:19.723629   11892 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:13:19.742351   11892 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-069000" ...
	I0507 11:13:19.746927   11892 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51437-:22,hostfwd=tcp::51438-:2376,hostname=stopped-upgrade-069000 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/disk.qcow2
	I0507 11:13:19.792677   11892 main.go:141] libmachine: STDOUT: 
	I0507 11:13:19.792718   11892 main.go:141] libmachine: STDERR: 
	I0507 11:13:19.792724   11892 main.go:141] libmachine: Waiting for VM to start (ssh -p 51437 docker@127.0.0.1)...
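
The qemu command above forwards host port 51437 to guest port 22 (hostfwd=tcp::51437-:22), so "waiting for the VM" reduces to dialing 127.0.0.1:51437 until it accepts a connection. A minimal Go sketch of that wait loop, assuming a flat 500ms poll interval (the actual cadence is not shown in the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort polls addr until a TCP connect succeeds or timeout elapses.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("port %s not reachable: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// 51437 is the hostfwd SSH port from the qemu command line above.
	if err := waitForPort("127.0.0.1:51437", 5*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("guest SSH is up")
}
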
	I0507 11:13:39.944802   11892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/config.json ...
	I0507 11:13:39.945546   11892 machine.go:94] provisionDockerMachine start ...
	I0507 11:13:39.945729   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:39.946154   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:39.946167   11892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 11:13:40.042522   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 11:13:40.042557   11892 buildroot.go:166] provisioning hostname "stopped-upgrade-069000"
	I0507 11:13:40.042647   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.042855   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.042867   11892 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-069000 && echo "stopped-upgrade-069000" | sudo tee /etc/hostname
	I0507 11:13:40.121753   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-069000
	
	I0507 11:13:40.121816   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.121946   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.121956   11892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-069000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-069000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-069000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 11:13:40.195290   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 11:13:40.195302   11892 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18804-8175/.minikube CaCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18804-8175/.minikube}
	I0507 11:13:40.195313   11892 buildroot.go:174] setting up certificates
	I0507 11:13:40.195319   11892 provision.go:84] configureAuth start
	I0507 11:13:40.195328   11892 provision.go:143] copyHostCerts
	I0507 11:13:40.195394   11892 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem, removing ...
	I0507 11:13:40.195404   11892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem
	I0507 11:13:40.195533   11892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.pem (1078 bytes)
	I0507 11:13:40.195725   11892 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem, removing ...
	I0507 11:13:40.195728   11892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem
	I0507 11:13:40.195777   11892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/cert.pem (1123 bytes)
	I0507 11:13:40.195913   11892 exec_runner.go:144] found /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem, removing ...
	I0507 11:13:40.195916   11892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem
	I0507 11:13:40.195959   11892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18804-8175/.minikube/key.pem (1675 bytes)
	I0507 11:13:40.196048   11892 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-069000 san=[127.0.0.1 localhost minikube stopped-upgrade-069000]
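
provision.go issues a server certificate signed by the minikube CA with the SAN list shown (127.0.0.1, localhost, minikube, stopped-upgrade-069000). A compressed crypto/x509 sketch of issuing such a cert; the freshly generated self-signed CA here merely stands in for the cached minikubeCA key pair, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; minikube reuses ca.pem/ca-key.pem from its store.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-069000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// The SAN list from the log, split into the appropriate x509 fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-069000"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
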
	I0507 11:13:40.251626   11892 provision.go:177] copyRemoteCerts
	I0507 11:13:40.251656   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 11:13:40.251662   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:13:40.290560   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0507 11:13:40.298251   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 11:13:40.305218   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0507 11:13:40.311762   11892 provision.go:87] duration metric: took 116.302333ms to configureAuth
	I0507 11:13:40.311772   11892 buildroot.go:189] setting minikube options for container-runtime
	I0507 11:13:40.311882   11892 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:13:40.311917   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.312008   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.312013   11892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 11:13:40.382240   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 11:13:40.382250   11892 buildroot.go:70] root file system type: tmpfs
	I0507 11:13:40.382314   11892 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 11:13:40.382389   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.382514   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.382550   11892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 11:13:40.457303   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 11:13:40.457365   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.457491   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.457500   11892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 11:13:40.840104   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 11:13:40.840118   11892 machine.go:97] duration metric: took 893.556542ms to provisionDockerMachine
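
The diff-then-move one-liner above makes the unit update idempotent: docker is only re-enabled and restarted when the rendered docker.service actually differs from what is on disk (here the file did not exist yet, hence the diff error and the new symlink). A sketch of the same write-if-changed pattern in Go; updateUnit is an illustrative helper, not minikube's:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit compares the rendered unit with the file on disk and only
// moves it into place and restarts docker when something changed.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the daemon-reload/restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
}
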
	I0507 11:13:40.840125   11892 start.go:293] postStartSetup for "stopped-upgrade-069000" (driver="qemu2")
	I0507 11:13:40.840132   11892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 11:13:40.840211   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 11:13:40.840222   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:13:40.878018   11892 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 11:13:40.879147   11892 info.go:137] Remote host: Buildroot 2021.02.12
	I0507 11:13:40.879155   11892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18804-8175/.minikube/addons for local assets ...
	I0507 11:13:40.879231   11892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18804-8175/.minikube/files for local assets ...
	I0507 11:13:40.879324   11892 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem -> 94222.pem in /etc/ssl/certs
	I0507 11:13:40.879412   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 11:13:40.881874   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem --> /etc/ssl/certs/94222.pem (1708 bytes)
	I0507 11:13:40.888466   11892 start.go:296] duration metric: took 48.283166ms for postStartSetup
	I0507 11:13:40.888479   11892 fix.go:56] duration metric: took 21.152670541s for fixHost
	I0507 11:13:40.888510   11892 main.go:141] libmachine: Using SSH client type: native
	I0507 11:13:40.888615   11892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f85c80] 0x100f884e0 <nil>  [] 0s} localhost 51437 <nil> <nil>}
	I0507 11:13:40.888619   11892 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0507 11:13:40.958702   11892 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715105621.134925421
	
	I0507 11:13:40.958711   11892 fix.go:216] guest clock: 1715105621.134925421
	I0507 11:13:40.958715   11892 fix.go:229] Guest: 2024-05-07 11:13:41.134925421 -0700 PDT Remote: 2024-05-07 11:13:40.88848 -0700 PDT m=+21.257240584 (delta=246.445421ms)
	I0507 11:13:40.958731   11892 fix.go:200] guest clock delta is within tolerance: 246.445421ms
	I0507 11:13:40.958734   11892 start.go:83] releasing machines lock for "stopped-upgrade-069000", held for 21.222858042s
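
fix.go reads the guest clock with `date +%s.%N` and compares it to the host clock; the 246.445421ms delta above is within tolerance, so no clock correction is forced. A small Go sketch of that comparison using the sample values from the log (the one-second tolerance threshold is an assumption, not taken from the log):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns the
// absolute skew against the given local timestamp.
func guestClockDelta(output string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(output, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(local)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Guest and host timestamps taken from the log lines above; float64
	// parsing loses sub-microsecond precision, so the delta is approximate.
	d, err := guestClockDelta("1715105621.134925421", time.Unix(1715105620, 888480000))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < tolerance)
}
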
	I0507 11:13:40.958805   11892 ssh_runner.go:195] Run: cat /version.json
	I0507 11:13:40.958816   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:13:40.958823   11892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 11:13:40.958850   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	W0507 11:13:40.959462   11892 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51437: connect: connection refused
	I0507 11:13:40.959491   11892 retry.go:31] will retry after 213.71735ms: dial tcp [::1]:51437: connect: connection refused
	W0507 11:13:41.220056   11892 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
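
retry.go backs off and retries the transient SSH dial failure (the freshly restarted VM's forwarded port briefly refused connections), while the missing /version.json is merely warned about. A generic Go sketch of the jittered retry-with-backoff pattern behind the "will retry after 213.71735ms" line; retryExpo is an illustrative name:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered exponential backoff until it
// succeeds or the total budget is spent.
func retryExpo(fn func() error, initial, total time.Duration) error {
	deadline := time.Now().Add(total)
	wait := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter explains the odd 213.71735ms figure in the log.
		sleep := time.Duration(float64(wait) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("dial tcp [::1]:51437: connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 30*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}
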
	I0507 11:13:41.220246   11892 ssh_runner.go:195] Run: systemctl --version
	I0507 11:13:41.224222   11892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 11:13:41.227474   11892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 11:13:41.227524   11892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0507 11:13:41.232860   11892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0507 11:13:41.240289   11892 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 11:13:41.240300   11892 start.go:494] detecting cgroup driver to use...
	I0507 11:13:41.240399   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 11:13:41.249905   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0507 11:13:41.253444   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 11:13:41.256872   11892 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 11:13:41.256895   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 11:13:41.260321   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 11:13:41.263793   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 11:13:41.266918   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 11:13:41.269606   11892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 11:13:41.272513   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 11:13:41.275906   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 11:13:41.279140   11892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 11:13:41.281919   11892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 11:13:41.284655   11892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 11:13:41.287620   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:41.356814   11892 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 11:13:41.367875   11892 start.go:494] detecting cgroup driver to use...
	I0507 11:13:41.367955   11892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 11:13:41.372509   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 11:13:41.377015   11892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 11:13:41.384088   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 11:13:41.388723   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 11:13:41.393442   11892 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 11:13:41.433114   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 11:13:41.438203   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 11:13:41.443421   11892 ssh_runner.go:195] Run: which cri-dockerd
	I0507 11:13:41.444601   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 11:13:41.447503   11892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 11:13:41.452857   11892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 11:13:41.530763   11892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 11:13:41.606235   11892 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 11:13:41.606299   11892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 11:13:41.611537   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:41.696054   11892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 11:13:42.844063   11892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146855417s)
	I0507 11:13:42.844119   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 11:13:42.848745   11892 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0507 11:13:42.855873   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 11:13:42.860510   11892 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 11:13:42.940200   11892 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 11:13:43.016094   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:43.092860   11892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 11:13:43.098759   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 11:13:43.103812   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:43.162376   11892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 11:13:43.208312   11892 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 11:13:43.208413   11892 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 11:13:43.210340   11892 start.go:562] Will wait 60s for crictl version
	I0507 11:13:43.210374   11892 ssh_runner.go:195] Run: which crictl
	I0507 11:13:43.211729   11892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 11:13:43.227092   11892 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0507 11:13:43.227165   11892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 11:13:43.244824   11892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 11:13:43.264923   11892 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0507 11:13:43.264995   11892 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0507 11:13:43.266334   11892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 11:13:43.270554   11892 kubeadm.go:877] updating cluster {Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0507 11:13:43.270604   11892 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0507 11:13:43.270645   11892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 11:13:43.281236   11892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 11:13:43.281245   11892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0507 11:13:43.281292   11892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 11:13:43.284438   11892 ssh_runner.go:195] Run: which lz4
	I0507 11:13:43.285672   11892 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0507 11:13:43.286987   11892 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 11:13:43.287006   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0507 11:13:44.003239   11892 docker.go:649] duration metric: took 716.960417ms to copy over tarball
	I0507 11:13:44.003300   11892 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0507 11:13:45.168535   11892 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164225875s)
	I0507 11:13:45.168550   11892 ssh_runner.go:146] rm: /preloaded.tar.lz4
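
Because the expected registry.k8s.io images were not yet present, the preloaded image tarball is copied from the host cache (it did not exist on the guest, per the stat failure), extracted into /var with lz4, and then removed. A Go sketch of that flow as shell-outs; runCmd is a stand-in for minikube's ssh_runner, and the scp step is elided:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runCmd runs a command locally; in minikube the same commands are run
// over SSH inside the guest.
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence probe mirrors the `stat -c "%s %y"` call in the log; on
	// failure minikube scp's the cached tarball over (copy elided here).
	if err := runCmd("stat", "-c", "%s %y", tarball); err != nil {
		fmt.Println("tarball missing, would copy from host cache first:", err)
	}
	start := time.Now()
	if err := runCmd("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s to extract\n", time.Since(start))
	_ = runCmd("sudo", "rm", "-f", tarball) // reclaim space once images are in /var/lib/docker
}
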
	I0507 11:13:45.184828   11892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 11:13:45.188152   11892 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0507 11:13:45.193280   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:45.281973   11892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 11:13:46.821282   11892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.538112542s)
	I0507 11:13:46.821391   11892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 11:13:46.833016   11892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 11:13:46.833026   11892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0507 11:13:46.833044   11892 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0507 11:13:46.839760   11892 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:46.839780   11892 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:46.839847   11892 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:46.839866   11892 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:46.839904   11892 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:46.839915   11892 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:46.839979   11892 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0507 11:13:46.840377   11892 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:46.847096   11892 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:46.847262   11892 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:46.847965   11892 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0507 11:13:46.848114   11892 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:46.847979   11892 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:46.848002   11892 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:46.848137   11892 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:46.847968   11892 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0507 11:13:47.639203   11892 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0507 11:13:47.639500   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:47.658595   11892 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0507 11:13:47.658631   11892 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:47.658708   11892 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:13:47.676777   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0507 11:13:47.676921   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0507 11:13:47.678719   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0507 11:13:47.678736   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0507 11:13:47.705619   11892 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0507 11:13:47.705635   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0507 11:13:47.838857   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:47.874122   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:47.928106   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:47.947638   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0507 11:13:47.960157   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
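
Each cached image file is streamed into the docker daemon with `sudo cat <file> | docker load`. The equivalent in Go, piping the tarball to docker load's stdin (the path is taken from the log; loadImage is an illustrative helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams an image tarball into the docker daemon, the same
// effect as `cat path | docker load`.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println(err)
	}
}
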
	I0507 11:13:47.960199   11892 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0507 11:13:47.960208   11892 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0507 11:13:47.960215   11892 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:47.960217   11892 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:47.960271   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0507 11:13:47.960281   11892 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0507 11:13:47.960271   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0507 11:13:47.960291   11892 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:47.960314   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0507 11:13:47.962805   11892 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0507 11:13:47.962819   11892 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0507 11:13:47.962862   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0507 11:13:47.987867   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0507 11:13:47.987976   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0507 11:13:47.987989   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0507 11:13:47.992811   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0507 11:13:47.995130   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0507 11:13:47.995139   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0507 11:13:47.995149   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0507 11:13:47.995232   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0507 11:13:47.996938   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0507 11:13:47.996961   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0507 11:13:48.013809   11892 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0507 11:13:48.013823   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0507 11:13:48.060279   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0507 11:13:48.072762   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:48.087425   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0507 11:13:48.097631   11892 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0507 11:13:48.097759   11892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:48.100833   11892 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0507 11:13:48.100857   11892 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:48.100904   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0507 11:13:48.120917   11892 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0507 11:13:48.120939   11892 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:48.121000   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0507 11:13:48.125444   11892 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0507 11:13:48.125468   11892 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:48.125524   11892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0507 11:13:48.138751   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0507 11:13:48.173763   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0507 11:13:48.173851   11892 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0507 11:13:48.173962   11892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0507 11:13:48.184724   11892 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0507 11:13:48.184757   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0507 11:13:48.264581   11892 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0507 11:13:48.264596   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0507 11:13:48.400521   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0507 11:13:48.400550   11892 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0507 11:13:48.400560   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0507 11:13:48.437574   11892 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0507 11:13:48.437613   11892 cache_images.go:92] duration metric: took 1.60345075s to LoadCachedImages
	W0507 11:13:48.437674   11892 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0507 11:13:48.437681   11892 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0507 11:13:48.437739   11892 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-069000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 11:13:48.437801   11892 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 11:13:48.451736   11892 cni.go:84] Creating CNI manager for ""
	I0507 11:13:48.451749   11892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:13:48.451753   11892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 11:13:48.451763   11892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-069000 NodeName:stopped-upgrade-069000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 11:13:48.451836   11892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-069000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 11:13:48.451889   11892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0507 11:13:48.454820   11892 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 11:13:48.454843   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 11:13:48.457953   11892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0507 11:13:48.462957   11892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 11:13:48.468160   11892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0507 11:13:48.473578   11892 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0507 11:13:48.474812   11892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 11:13:48.478665   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 11:13:48.550329   11892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:13:48.556923   11892 certs.go:68] Setting up /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000 for IP: 10.0.2.15
	I0507 11:13:48.556932   11892 certs.go:194] generating shared ca certs ...
	I0507 11:13:48.556941   11892 certs.go:226] acquiring lock for ca certs: {Name:mk0fe80b930eecdc420c4c0ef01e5eae3fea7733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.557106   11892 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.key
	I0507 11:13:48.557146   11892 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.key
	I0507 11:13:48.557151   11892 certs.go:256] generating profile certs ...
	I0507 11:13:48.557214   11892 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key
	I0507 11:13:48.557235   11892 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5
	I0507 11:13:48.557248   11892 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0507 11:13:48.718420   11892 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5 ...
	I0507 11:13:48.718436   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5: {Name:mk8136986f918f33932b70467945a54e6f814a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.718756   11892 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5 ...
	I0507 11:13:48.718761   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5: {Name:mk33d042cf0514914cf7108135301e8f542454ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:13:48.718885   11892 certs.go:381] copying /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt.a96e11d5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt
	I0507 11:13:48.719044   11892 certs.go:385] copying /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key.a96e11d5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key
	I0507 11:13:48.719189   11892 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/proxy-client.key
	I0507 11:13:48.719326   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422.pem (1338 bytes)
	W0507 11:13:48.719356   11892 certs.go:480] ignoring /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422_empty.pem, impossibly tiny 0 bytes
	I0507 11:13:48.719362   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca-key.pem (1679 bytes)
	I0507 11:13:48.719381   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem (1078 bytes)
	I0507 11:13:48.719405   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem (1123 bytes)
	I0507 11:13:48.719425   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/key.pem (1675 bytes)
	I0507 11:13:48.719463   11892 certs.go:484] found cert: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem (1708 bytes)
	I0507 11:13:48.719809   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 11:13:48.726830   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 11:13:48.734526   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 11:13:48.742064   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0507 11:13:48.749362   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0507 11:13:48.756997   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 11:13:48.763443   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 11:13:48.770542   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 11:13:48.777717   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 11:13:48.784513   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/9422.pem --> /usr/share/ca-certificates/9422.pem (1338 bytes)
	I0507 11:13:48.791148   11892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/ssl/certs/94222.pem --> /usr/share/ca-certificates/94222.pem (1708 bytes)
	I0507 11:13:48.798033   11892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 11:13:48.803292   11892 ssh_runner.go:195] Run: openssl version
	I0507 11:13:48.805239   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94222.pem && ln -fs /usr/share/ca-certificates/94222.pem /etc/ssl/certs/94222.pem"
	I0507 11:13:48.808210   11892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94222.pem
	I0507 11:13:48.809495   11892 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 17:57 /usr/share/ca-certificates/94222.pem
	I0507 11:13:48.809511   11892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94222.pem
	I0507 11:13:48.811209   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94222.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 11:13:48.814318   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 11:13:48.817590   11892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:13:48.819027   11892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:13:48.819046   11892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 11:13:48.820947   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 11:13:48.823696   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9422.pem && ln -fs /usr/share/ca-certificates/9422.pem /etc/ssl/certs/9422.pem"
	I0507 11:13:48.826847   11892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9422.pem
	I0507 11:13:48.828488   11892 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 17:57 /usr/share/ca-certificates/9422.pem
	I0507 11:13:48.828512   11892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9422.pem
	I0507 11:13:48.830236   11892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9422.pem /etc/ssl/certs/51391683.0"
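
The openssl -hash calls above compute each CA's legacy subject hash (e.g. b5213941 for minikubeCA), and the symlinks follow OpenSSL's c_rehash convention: verifiers look up trust anchors as <subject-hash>.N under /etc/ssl/certs. A sketch of the same two steps driven from Go (path hypothetical; the symlink itself needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // hypothetical path
        // openssl prints the legacy subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // OpenSSL resolves CAs as <subject-hash>.N in the certs dir; .0 is the first.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        fmt.Println("ln -fs", cert, link)
        _ = os.Symlink(cert, link) // would need root; shown for illustration
    }
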
	I0507 11:13:48.833319   11892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 11:13:48.834709   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0507 11:13:48.838007   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0507 11:13:48.840026   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0507 11:13:48.841965   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0507 11:13:48.843844   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0507 11:13:48.845688   11892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
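
Each `openssl x509 -checkend 86400` probe above asks whether a certificate expires within the next 24 hours; a nonzero exit would force regeneration. The equivalent check in Go, sketched under the assumption that each file holds a single PEM certificate:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // mirroring `openssl x509 -checkend <seconds>` (which exits 1 when it does).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Path taken from the log; these live under /var/lib/minikube/certs.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
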
	I0507 11:13:48.848174   11892 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0507 11:13:48.848242   11892 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 11:13:48.858217   11892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0507 11:13:48.861362   11892 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0507 11:13:48.861368   11892 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0507 11:13:48.861371   11892 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0507 11:13:48.861392   11892 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0507 11:13:48.864084   11892 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0507 11:13:48.864362   11892 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-069000" does not appear in /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:13:48.864465   11892 kubeconfig.go:62] /Users/jenkins/minikube-integration/18804-8175/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-069000" cluster setting kubeconfig missing "stopped-upgrade-069000" context setting]
	I0507 11:13:48.864654   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
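
The repair step rewrites the shared kubeconfig so it again carries a cluster and context entry for stopped-upgrade-069000. A rough sketch of that repair using client-go's clientcmd package (requires the k8s.io/client-go module; field values taken from the log, error handling simplified; not minikube's actual kubeconfig.go):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/18804-8175/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            cfg = api.NewConfig() // start fresh if the file is missing/unreadable
        }
        name := "stopped-upgrade-069000"
        profile := "/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/" + name
        // Re-add the cluster, user, and context entries the verifier flagged as missing.
        cfg.Clusters[name] = &api.Cluster{
            Server:               "https://10.0.2.15:8443",
            CertificateAuthority: "/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt",
        }
        cfg.AuthInfos[name] = &api.AuthInfo{
            ClientCertificate: profile + "/client.crt",
            ClientKey:         profile + "/client.key",
        }
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
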
	I0507 11:13:48.865116   11892 kapi.go:59] client config for stopped-upgrade-069000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 11:13:48.865440   11892 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0507 11:13:48.868141   11892 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-069000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
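
Drift detection here is just `diff -u` against the freshly rendered kubeadm.yaml.new: exit status 0 means the configs match, 1 means they differ (the unified diff above shows the new unix:// CRI socket URI and the cgroupfs driver), and anything else is an I/O problem. A sketch of that decision, run locally rather than over SSH:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        oldPath := "/var/tmp/minikube/kubeadm.yaml"
        newPath := "/var/tmp/minikube/kubeadm.yaml.new"
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("configs identical, no reconfigure needed")
        case errors.As(err, &ee) && ee.ExitCode() == 1:
            // diff exits 1 when the files differ; the unified diff is the evidence.
            fmt.Printf("config drift detected, will reconfigure:\n%s", out)
        default:
            panic(err) // exit 2: a file is missing or unreadable
        }
    }
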
	I0507 11:13:48.868147   11892 kubeadm.go:1154] stopping kube-system containers ...
	I0507 11:13:48.868187   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 11:13:48.878606   11892 docker.go:483] Stopping containers: [2cb73641d9d8 863c2a33feb6 c1225d1b2bab a3e5338202fe f78da15e98b0 b050dd24f9a8 94b863037f9c 9023fe75c28f]
	I0507 11:13:48.878671   11892 ssh_runner.go:195] Run: docker stop 2cb73641d9d8 863c2a33feb6 c1225d1b2bab a3e5338202fe f78da15e98b0 b050dd24f9a8 94b863037f9c 9023fe75c28f
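
Stopping the control plane relies on the kubelet's container naming scheme: every pod container is named k8s_<container>_<pod>_<namespace>_..., so a single docker ps name filter finds all kube-system containers and one docker stop takes them down together. Sketched locally (the real code issues the same commands over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List every container (running or not) whose name matches the
        // kubelet's naming scheme for the kube-system namespace.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return
        }
        fmt.Println("Stopping containers:", ids)
        // One docker stop with all IDs, as in the log above.
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            panic(err)
        }
    }
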
	I0507 11:13:48.889188   11892 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0507 11:13:48.894689   11892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:13:48.897751   11892 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 11:13:48.897760   11892 kubeadm.go:156] found existing configuration files:
	
	I0507 11:13:48.897780   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf
	I0507 11:13:48.900366   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 11:13:48.900389   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:13:48.903016   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf
	I0507 11:13:48.906200   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 11:13:48.906226   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:13:48.908840   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf
	I0507 11:13:48.911190   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 11:13:48.911211   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:13:48.914156   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf
	I0507 11:13:48.916531   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 11:13:48.916555   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
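
The four grep/rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that is missing, or that does not point at the expected control-plane endpoint, is deleted so the following kubeadm phases regenerate it. In sketch form (reading the files directly instead of shelling out to grep; removal needs root on a real node):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51472"
        files := []string{
            "/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            // A missing file or a wrong endpoint both mean the config is stale:
            // remove it so `kubeadm init phase kubeconfig` writes a fresh one.
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale", f)
                os.Remove(f)
            }
        }
    }
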
	I0507 11:13:48.919103   11892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 11:13:48.922066   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:48.944484   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:49.567933   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:49.705730   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 11:13:49.738019   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
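
Rather than a full `kubeadm init`, the restart path replays individual init phases in order against the rendered config. A sketch of that loop, using the binary path and phase order shown above (run as root on the node; this simplifies away minikube's PATH and sudo wrapping):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // The same phase order the log shows for a control-plane restart.
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start",
            "control-plane all", "etcd local",
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(p)...)
            args = append(args, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(err) // a failed phase aborts the restart
            }
        }
    }
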
	I0507 11:13:49.759544   11892 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:13:49.759622   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:13:50.262186   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:13:50.762298   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:13:50.771239   11892 api_server.go:72] duration metric: took 1.0111095s to wait for apiserver process to appear ...
	I0507 11:13:50.771250   11892 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:13:50.771260   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:13:55.775418   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:13:55.775471   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:00.777472   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:00.777541   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:05.778976   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:05.779006   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:10.780498   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:10.780541   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:15.781672   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:15.781716   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:20.782863   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:20.782905   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:25.784022   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:25.784047   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:30.785289   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:30.785336   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:35.786812   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:35.786856   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:40.789113   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:40.789150   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:45.790463   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:45.790487   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:50.792597   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
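
Each healthz probe above gives up after roughly five seconds: 10.0.2.15 is the guest's QEMU user-mode NAT address, which is not reachable from the host, so every GET times out. The poller amounts to a TLS GET with a short client timeout, sketched here with certificate verification disabled to stay self-contained (the real client pins minikubeCA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for i := 0; i < 12; i++ { // bounded retries for the sketch
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("stopped:", err) // what the log reports each cycle
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
    }
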
	I0507 11:14:50.792817   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:50.810386   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:14:50.810470   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:50.823047   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:14:50.823123   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:50.834526   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:14:50.834598   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:50.845182   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:14:50.845257   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:50.855174   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:14:50.855241   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:50.865982   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:14:50.866061   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:50.875949   11892 logs.go:276] 0 containers: []
	W0507 11:14:50.875964   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:50.876032   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:50.889027   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:14:50.889044   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:14:50.889050   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:14:50.903176   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:14:50.903186   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:14:50.945531   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:14:50.945545   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:14:50.957086   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:50.957096   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:50.981150   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:50.981159   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:50.985137   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:14:50.985147   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:14:50.996582   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:14:50.996595   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:14:51.007570   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:51.007581   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:51.110388   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:14:51.110402   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:14:51.121870   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:14:51.121882   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:14:51.135822   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:14:51.135836   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:14:51.147618   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:14:51.147630   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:14:51.165738   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:14:51.165750   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:14:51.178488   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:51.178501   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:51.216603   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:14:51.216616   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:14:51.230377   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:14:51.230389   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:14:51.245445   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:14:51.245461   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
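
Once healthz keeps failing, each retry cycle snapshots diagnostics: container IDs are resolved per component via docker ps name filters, then the last 400 lines of each container are tailed, alongside the kubelet and docker journals, dmesg, and `kubectl describe nodes`. The per-container part of that loop, sketched:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns all container IDs (running or exited) whose name
    // matches the given kubelet component, e.g. "kube-apiserver".
    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            for _, id := range containerIDs(c) {
                fmt.Printf("=== %s [%s] ===\n", c, id)
                // Tail the last 400 lines, as the gathering loop above does.
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(out))
            }
        }
    }
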
	I0507 11:14:53.763002   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:14:58.765222   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0507 11:14:58.765309   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:14:58.779482   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:14:58.779584   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:14:58.790419   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:14:58.790484   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:14:58.800523   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:14:58.800593   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:14:58.811158   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:14:58.811229   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:14:58.821492   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:14:58.821562   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:14:58.831906   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:14:58.831973   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:14:58.842304   11892 logs.go:276] 0 containers: []
	W0507 11:14:58.842316   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:14:58.842375   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:14:58.855314   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:14:58.855341   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:14:58.855348   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:14:58.859938   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:14:58.859944   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:14:58.873515   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:14:58.873526   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:14:58.911864   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:14:58.911877   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:14:58.926055   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:14:58.926065   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:14:58.937374   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:14:58.937387   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:14:58.951887   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:14:58.951896   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:14:58.964059   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:14:58.964070   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:14:59.000778   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:14:59.000788   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:14:59.039269   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:14:59.039280   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:14:59.058025   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:14:59.058036   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:14:59.071750   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:14:59.071762   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:14:59.082910   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:14:59.082922   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:14:59.108090   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:14:59.108096   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:14:59.125452   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:14:59.125462   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:14:59.136148   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:14:59.136161   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:14:59.153537   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:14:59.153551   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:01.667525   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:06.669731   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:06.669984   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:06.686934   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:06.687021   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:06.699703   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:06.699773   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:06.711095   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:06.711156   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:06.721479   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:06.721546   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:06.732061   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:06.732132   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:06.743026   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:06.743091   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:06.754295   11892 logs.go:276] 0 containers: []
	W0507 11:15:06.754307   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:06.754365   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:06.765708   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:06.765725   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:06.765732   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:06.803164   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:06.803177   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:06.815204   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:06.815214   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:06.834362   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:06.834373   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:06.845738   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:06.845750   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:06.880000   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:06.880014   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:06.891631   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:06.891641   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:06.903305   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:06.903318   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:06.917073   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:06.917086   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:06.932348   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:06.932359   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:06.944051   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:06.944065   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:06.968436   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:06.968447   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:06.980069   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:06.980082   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:06.994446   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:06.994458   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:06.998528   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:06.998536   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:07.013103   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:07.013116   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:07.026343   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:07.026352   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:09.564554   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:14.566659   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:14.566771   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:14.579134   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:14.579205   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:14.592285   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:14.592356   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:14.602991   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:14.603059   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:14.614872   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:14.614939   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:14.624773   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:14.624841   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:14.635262   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:14.635339   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:14.645478   11892 logs.go:276] 0 containers: []
	W0507 11:15:14.645490   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:14.645546   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:14.656150   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:14.656166   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:14.656170   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:14.670364   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:14.673370   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:14.687683   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:14.687693   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:14.725797   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:14.725809   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:14.764097   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:14.764107   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:14.779196   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:14.779207   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:14.797434   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:14.797448   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:14.822891   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:14.822899   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:14.834378   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:14.834389   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:14.838398   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:14.838406   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:14.852450   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:14.852459   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:14.867348   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:14.867363   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:14.879070   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:14.879081   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:14.890368   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:14.890379   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:14.926878   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:14.926889   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:14.944796   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:14.944807   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:14.957790   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:14.957804   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:17.469735   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:22.471829   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:22.471909   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:22.485008   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:22.485077   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:22.503907   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:22.503979   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:22.518308   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:22.518378   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:22.528316   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:22.528382   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:22.538701   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:22.538773   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:22.549135   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:22.549208   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:22.559354   11892 logs.go:276] 0 containers: []
	W0507 11:15:22.559367   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:22.559426   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:22.569887   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:22.569905   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:22.569910   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:22.589593   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:22.589608   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:22.606725   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:22.606736   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:22.617766   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:22.617777   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:22.655386   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:22.655402   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:22.659987   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:22.659996   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:22.673888   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:22.673904   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:22.685459   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:22.685471   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:22.699011   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:22.699022   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:22.722628   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:22.722639   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:22.736751   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:22.736761   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:22.751772   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:22.751782   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:22.764894   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:22.764907   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:22.780960   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:22.780969   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:22.792576   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:22.792589   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:22.826573   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:22.826582   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:22.864152   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:22.864163   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:25.379919   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:30.382337   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:30.382573   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:30.405208   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:30.405301   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:30.422750   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:30.422826   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:30.435229   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:30.435296   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:30.446136   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:30.446200   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:30.456570   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:30.456639   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:30.468362   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:30.468432   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:30.479021   11892 logs.go:276] 0 containers: []
	W0507 11:15:30.479033   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:30.479087   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:30.493228   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:30.493246   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:30.493252   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:30.530050   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:30.530060   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:30.534299   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:30.534308   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:30.553620   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:30.553630   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:30.568076   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:30.568087   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:30.579951   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:30.579962   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:30.598039   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:30.598049   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:30.613971   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:30.613982   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:30.638350   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:30.638357   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:30.652164   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:30.652175   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:30.666182   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:30.666192   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:30.677772   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:30.677784   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:30.689147   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:30.689159   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:30.700696   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:30.700706   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:30.738787   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:30.738794   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:30.772987   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:30.773000   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:30.787823   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:30.787836   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:33.301511   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:38.303590   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:38.303786   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:38.319444   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:38.319529   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:38.331435   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:38.331519   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:38.346176   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:38.346245   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:38.356898   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:38.356966   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:38.370083   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:38.370154   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:38.380681   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:38.380756   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:38.396047   11892 logs.go:276] 0 containers: []
	W0507 11:15:38.396058   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:38.396119   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:38.408490   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:38.408508   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:38.408514   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:38.422377   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:38.422391   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:38.458418   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:38.458428   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:38.478292   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:38.478304   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:38.492769   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:38.492779   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:38.512706   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:38.512720   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:38.523904   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:38.523916   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:38.535956   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:38.535969   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:38.573351   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:38.573365   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:38.577531   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:38.577538   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:38.610725   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:38.610736   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:38.622207   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:38.622216   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:38.640272   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:38.640286   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:38.652200   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:38.652213   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:38.663412   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:38.663424   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:38.676069   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:38.676079   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:38.691244   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:38.691254   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:41.216658   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:46.218856   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
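	[editor's note] The api_server.go:253/269 pairs above repeat the same probe: each /healthz GET is given roughly five seconds (the gap between "Checking" and "stopped" timestamps), fails with "context deadline exceeded", and the loop retries after gathering diagnostics. A minimal Go sketch of that pattern, not minikube's actual implementation (the address, timeout, and sleep are read off the log; the attempt cap is hypothetical):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // ~5s per attempt, as the log timestamps show
			Transport: &http.Transport{
				// the apiserver serves a self-signed cert, so verification is skipped here
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 10; attempt++ { // cap is illustrative only
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				// e.g. "Client.Timeout exceeded while awaiting headers", as above
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(2500 * time.Millisecond) // log shows ~2.5s between cycles
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("gave up: apiserver never became healthy")
	}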
	I0507 11:15:46.218988   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:46.231026   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:46.231111   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:46.241861   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:46.241931   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:46.252619   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:46.252687   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:46.263177   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:46.263240   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:46.273902   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:46.273973   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:46.284964   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:46.285027   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:46.295353   11892 logs.go:276] 0 containers: []
	W0507 11:15:46.295366   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:46.295424   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:46.305933   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
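	[editor's note] Each enumeration block above runs one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component; two IDs for a component means a current plus a previous (restarted) container, and an empty result (kindnet here) yields the "No container was found" warning. A hedged sketch of that lookup, assuming only the docker CLI flags visible in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or exited) whose Docker name
	// carries the k8s_<component> prefix that kubelet gives pod containers.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}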
	I0507 11:15:46.305949   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:46.305955   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:46.344297   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:46.344308   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:46.364951   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:46.364962   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:46.379960   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:46.379971   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:46.398053   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:46.398065   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:46.417284   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:46.417295   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:46.455796   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:46.455808   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:46.460266   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:46.460273   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:46.473885   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:46.473898   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:46.490404   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:46.490414   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:46.503379   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:46.503390   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:46.541872   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:46.541887   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:46.559524   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:46.559539   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:46.572111   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:46.572122   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:46.595908   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:46.595924   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:46.610893   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:46.610905   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:46.622822   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:46.622837   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:49.139818   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:15:54.141846   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:15:54.142134   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:15:54.168145   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:15:54.168267   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:15:54.185268   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:15:54.185351   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:15:54.199166   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:15:54.199248   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:15:54.215138   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:15:54.215209   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:15:54.225787   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:15:54.225856   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:15:54.239222   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:15:54.239290   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:15:54.251959   11892 logs.go:276] 0 containers: []
	W0507 11:15:54.251972   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:15:54.252033   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:15:54.269781   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:15:54.269799   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:15:54.269805   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:15:54.304532   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:15:54.304543   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:15:54.341403   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:15:54.341415   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:15:54.355807   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:15:54.355818   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:15:54.379048   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:15:54.379060   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:15:54.391282   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:15:54.391294   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:15:54.431576   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:15:54.431590   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:15:54.443351   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:15:54.443364   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:15:54.456661   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:15:54.456672   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:15:54.468170   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:15:54.468183   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:15:54.479996   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:15:54.480008   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:15:54.503569   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:15:54.503577   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:15:54.514351   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:15:54.514362   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:15:54.528760   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:15:54.528774   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:15:54.564969   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:15:54.564980   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:15:54.579608   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:15:54.579619   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:15:54.584250   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:15:54.584258   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:15:57.101098   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:02.103389   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:02.103629   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:02.127822   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:02.127925   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:02.142082   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:02.142158   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:02.153844   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:02.153915   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:02.164347   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:02.164415   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:02.174625   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:02.174692   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:02.185223   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:02.185292   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:02.198459   11892 logs.go:276] 0 containers: []
	W0507 11:16:02.198472   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:02.198528   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:02.209328   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:02.209346   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:02.209351   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:02.221519   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:02.221530   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:02.237773   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:02.237783   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:02.248590   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:02.248603   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:02.272612   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:02.272620   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:02.312216   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:02.312231   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:02.331058   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:02.331070   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:02.342001   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:02.342013   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:02.354172   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:02.354186   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:02.371341   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:02.371351   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:02.384623   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:02.384633   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:02.396345   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:02.396357   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:02.433352   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:02.433363   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:02.467072   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:02.467082   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:02.482673   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:02.482682   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:02.496436   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:02.496447   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:02.500526   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:02.500534   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:05.017057   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:10.019165   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:10.019277   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:10.030460   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:10.030541   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:10.041073   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:10.041136   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:10.051734   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:10.051810   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:10.062120   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:10.062182   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:10.074885   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:10.074967   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:10.086534   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:10.086628   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:10.096877   11892 logs.go:276] 0 containers: []
	W0507 11:16:10.096888   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:10.096944   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:10.107078   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:10.107097   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:10.107102   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:10.126764   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:10.126776   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:10.162387   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:10.162400   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:10.176357   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:10.176368   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:10.213844   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:10.213854   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:10.225022   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:10.225035   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:10.237173   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:10.237184   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:10.251628   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:10.251638   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:10.266418   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:10.266432   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:10.279764   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:10.279778   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:10.291105   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:10.291117   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:10.329014   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:10.329030   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:10.346169   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:10.346179   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:10.371280   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:10.371288   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:10.375352   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:10.375360   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:10.391516   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:10.391526   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:10.405576   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:10.405587   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:12.918664   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:17.920753   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:17.920918   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:17.934381   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:17.934467   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:17.946003   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:17.946080   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:17.956064   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:17.956139   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:17.966617   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:17.966689   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:17.977506   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:17.977571   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:17.987908   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:17.987973   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:18.002773   11892 logs.go:276] 0 containers: []
	W0507 11:16:18.002786   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:18.002845   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:18.012961   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:18.012979   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:18.012984   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:18.024451   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:18.024460   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:18.039129   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:18.039139   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:18.050673   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:18.050682   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:18.062057   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:18.062066   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:18.074030   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:18.074042   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:18.111738   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:18.111747   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:18.125229   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:18.125239   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:18.167941   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:18.167953   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:18.182099   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:18.182108   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:18.220517   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:18.220527   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:18.232300   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:18.232313   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:18.257188   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:18.257199   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:18.271246   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:18.271258   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:18.275726   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:18.275733   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:18.290266   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:18.290276   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:18.301245   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:18.301256   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:20.820584   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:25.822768   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:25.822981   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:25.847967   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:25.848056   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:25.862967   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:25.863041   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:25.873759   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:25.873827   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:25.884192   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:25.884263   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:25.894458   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:25.894525   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:25.904828   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:25.904897   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:25.914573   11892 logs.go:276] 0 containers: []
	W0507 11:16:25.914585   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:25.914638   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:25.924943   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:25.924964   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:25.924970   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:25.936365   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:25.936377   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:25.949726   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:25.949737   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:25.987701   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:25.987711   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:26.002999   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:26.003011   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:26.015011   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:26.015023   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:26.019525   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:26.019532   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:26.033848   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:26.033858   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:26.049512   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:26.049524   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:26.073646   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:26.073653   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:26.086080   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:26.086091   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:26.121398   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:26.121413   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:26.133253   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:26.133265   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:26.150504   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:26.150514   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:26.162629   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:26.162638   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:26.199179   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:26.199188   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:26.213863   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:26.213872   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:28.729630   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:33.731901   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:33.732287   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:33.761552   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:33.761681   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:33.779264   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:33.779359   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:33.793478   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:33.793554   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:33.812463   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:33.812537   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:33.823020   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:33.823094   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:33.833806   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:33.833883   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:33.845270   11892 logs.go:276] 0 containers: []
	W0507 11:16:33.845282   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:33.845343   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:33.860712   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:33.860733   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:33.860739   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:33.878714   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:33.878727   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:33.890596   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:33.890609   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:33.902924   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:33.902935   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:33.925782   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:33.925788   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:33.961384   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:33.961391   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:33.974699   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:33.974710   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:33.985896   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:33.985908   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:33.997396   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:33.997407   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:34.009558   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:34.009569   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:34.022853   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:34.022863   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:34.027173   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:34.027182   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:34.060661   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:34.060672   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:34.103955   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:34.103967   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:34.117367   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:34.117377   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:34.136161   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:34.136175   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:34.148632   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:34.148642   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:36.665581   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:41.667856   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:41.668328   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:41.709811   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:41.709961   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:41.730683   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:41.730787   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:41.745408   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:41.745487   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:41.757773   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:41.757847   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:41.768280   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:41.768350   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:41.779704   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:41.779781   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:41.789970   11892 logs.go:276] 0 containers: []
	W0507 11:16:41.789985   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:41.790047   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:41.800499   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:41.800517   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:41.800522   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:41.812138   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:41.812151   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:41.829050   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:41.829059   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:41.841819   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:41.841832   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:41.856140   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:41.856155   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:41.895584   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:41.895597   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:41.911460   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:41.911472   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:41.923080   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:41.923093   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:41.937902   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:41.937913   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:41.949148   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:41.949160   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:41.972770   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:41.972778   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:41.984227   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:41.984237   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:41.997433   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:41.997446   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:42.011426   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:42.011436   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:42.023162   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:42.023177   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:42.061732   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:42.061742   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:42.066530   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:42.066538   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:44.604353   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:49.606790   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:49.607072   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:49.634250   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:49.634357   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:49.650454   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:49.650536   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:49.663932   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:49.664010   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:49.674618   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:49.674684   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:49.685131   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:49.685187   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:49.695388   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:49.695457   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:49.705514   11892 logs.go:276] 0 containers: []
	W0507 11:16:49.705523   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:49.705575   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:49.716033   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:49.716051   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:49.716057   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:49.720306   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:49.720315   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:49.755008   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:49.755022   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:49.792546   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:49.792557   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:49.817768   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:49.817782   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:49.837600   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:49.837611   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:49.850212   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:49.850223   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:49.862775   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:49.862785   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:16:49.876214   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:49.876225   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:49.891355   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:49.891364   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:49.915050   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:49.915058   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:49.928706   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:49.928718   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:49.968931   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:49.968942   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:49.983239   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:49.983249   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:50.006983   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:50.006995   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:50.020450   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:50.020462   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:50.031933   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:50.031946   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:52.545708   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:16:57.547919   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:16:57.548128   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:16:57.562951   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:16:57.563032   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:16:57.574731   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:16:57.574804   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:16:57.585611   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:16:57.585674   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:16:57.596452   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:16:57.596525   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:16:57.606956   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:16:57.607028   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:16:57.622183   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:16:57.622257   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:16:57.632042   11892 logs.go:276] 0 containers: []
	W0507 11:16:57.632055   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:16:57.632116   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:16:57.642396   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:16:57.642414   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:16:57.642420   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:16:57.653941   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:16:57.653954   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:16:57.676688   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:16:57.676696   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:16:57.689317   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:16:57.689330   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:16:57.707489   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:16:57.707505   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:16:57.719271   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:16:57.719281   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:16:57.730523   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:16:57.730536   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:16:57.745118   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:16:57.745128   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:16:57.759998   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:16:57.760009   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:16:57.772245   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:16:57.772256   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:16:57.785733   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:16:57.785743   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:16:57.790471   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:16:57.790481   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:16:57.824654   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:16:57.824668   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:16:57.862899   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:16:57.862914   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:16:57.874814   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:16:57.874827   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:16:57.913174   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:16:57.913182   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:16:57.927796   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:16:57.927807   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:00.443172   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:05.445360   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:05.445571   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:05.464959   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:05.465057   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:05.480599   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:05.480678   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:05.492193   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:05.492266   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:05.502720   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:05.502796   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:05.513255   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:05.513322   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:05.523898   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:05.523972   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:05.534028   11892 logs.go:276] 0 containers: []
	W0507 11:17:05.534041   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:05.534098   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:05.544508   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:05.544526   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:05.544531   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:05.549068   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:05.549076   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:05.563480   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:05.563491   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:05.601205   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:05.601219   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:05.620313   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:05.620324   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:05.635040   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:05.635052   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:05.649063   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:05.649074   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:05.673510   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:05.673521   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:05.685307   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:05.685317   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:05.723811   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:05.723823   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:05.758885   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:05.758897   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:05.774147   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:05.774160   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:05.785450   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:05.785460   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:05.797422   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:05.797433   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:05.809595   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:05.809608   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:05.821368   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:05.821379   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:05.839217   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:05.839235   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:08.354431   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:13.356509   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
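
The pattern repeating above is minikube's apiserver wait loop: each healthz probe gets roughly five seconds before it is logged as "stopped", and every failed probe triggers a full sweep of container and system logs before the next attempt. Below is a minimal sketch of that probe in Go, assuming only the endpoint visible in the log; it is illustrative, not minikube's actual api_server.go implementation.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz probes the apiserver the way the loop above does: a GET with a
    // short client timeout against the (self-signed) endpoint.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" above
    		Transport: &http.Transport{
    			// assumption: skip verification in this sketch; minikube loads the cluster CA instead
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "context deadline exceeded", as in the log
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	for {
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    			fmt.Println("stopped:", err) // minikube gathers container logs here before retrying
    			time.Sleep(3 * time.Second)
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    }
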
	I0507 11:17:13.356688   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:13.370962   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:13.371038   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:13.382715   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:13.382783   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:13.393966   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:13.394037   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:13.406746   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:13.406824   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:13.417039   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:13.417108   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:13.427429   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:13.427491   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:13.437294   11892 logs.go:276] 0 containers: []
	W0507 11:17:13.437308   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:13.437367   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:13.447636   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:13.447653   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:13.447660   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:13.458829   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:13.458840   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:13.470844   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:13.470855   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:13.482890   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:13.482902   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:13.494286   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:13.494298   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:13.519336   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:13.519350   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:13.557173   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:13.557181   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:13.571733   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:13.571744   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:13.585653   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:13.585664   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:13.599478   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:13.599489   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:13.611168   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:13.611181   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:13.643952   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:13.643969   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:13.658123   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:13.658133   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:13.672444   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:13.672454   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:13.689538   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:13.689549   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:13.703603   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:13.703613   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:13.707484   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:13.707490   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:16.246967   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:21.248995   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:21.249094   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:21.265767   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:21.265841   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:21.279172   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:21.279249   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:21.289686   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:21.289750   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:21.299782   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:21.299858   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:21.310138   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:21.310213   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:21.321008   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:21.321075   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:21.331465   11892 logs.go:276] 0 containers: []
	W0507 11:17:21.331477   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:21.331536   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:21.341905   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:21.341924   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:21.341929   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:21.346071   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:21.346079   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:21.379604   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:21.379616   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:21.394943   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:21.394955   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:21.408724   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:21.408735   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:21.446772   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:21.446781   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:21.460946   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:21.460956   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:21.472093   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:21.472104   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:21.483976   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:21.483989   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:21.500464   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:21.500475   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:21.512365   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:21.512378   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:21.526490   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:21.526503   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:21.539838   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:21.539849   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:21.551670   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:21.551680   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:21.565332   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:21.565364   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:21.602080   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:21.602091   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:21.613382   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:21.613393   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:24.137392   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:29.138302   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:29.138569   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:29.163015   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:29.163138   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:29.179281   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:29.179365   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:29.193525   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:29.193606   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:29.206575   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:29.206648   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:29.218472   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:29.218544   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:29.230076   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:29.230144   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:29.240161   11892 logs.go:276] 0 containers: []
	W0507 11:17:29.240172   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:29.240227   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:29.250947   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:29.250965   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:29.250971   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:29.267946   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:29.267956   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:29.279371   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:29.279385   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:29.301398   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:29.301407   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:29.315098   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:29.315112   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:29.328896   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:29.328910   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:29.345066   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:29.345077   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:29.359766   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:29.359778   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:29.371967   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:29.371977   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:29.408661   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:29.408673   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:29.423420   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:29.423431   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:29.435058   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:29.435069   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:29.446887   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:29.446900   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:29.482988   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:29.483001   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:29.519706   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:29.519719   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:29.533401   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:29.533428   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:29.538001   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:29.538010   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:32.062247   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:37.064662   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:37.065072   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:37.099858   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:37.099985   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:37.120837   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:37.120939   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:37.135739   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:37.135824   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:37.148633   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:37.148705   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:37.159574   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:37.159637   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:37.171580   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:37.171643   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:37.182124   11892 logs.go:276] 0 containers: []
	W0507 11:17:37.182136   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:37.182195   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:37.192823   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:37.192841   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:37.192846   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:37.204760   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:37.204770   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:37.220051   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:37.220063   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:37.234105   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:37.234119   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:37.262230   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:37.262242   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:37.311469   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:37.311483   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:37.326131   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:37.326143   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:37.343578   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:37.343592   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:37.366504   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:37.366513   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:37.404971   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:37.404986   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:37.442544   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:37.442559   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:37.454124   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:37.454138   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:37.458489   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:37.458496   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:37.472336   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:37.472351   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:37.483532   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:37.483545   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:37.495618   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:37.495629   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:37.513075   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:37.513085   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:40.029246   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:45.031400   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:45.031599   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:17:45.049162   11892 logs.go:276] 2 containers: [620a2e8b5642 a3e5338202fe]
	I0507 11:17:45.049268   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:17:45.062978   11892 logs.go:276] 2 containers: [0d973c8e62e6 863c2a33feb6]
	I0507 11:17:45.063044   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:17:45.074504   11892 logs.go:276] 1 containers: [b1cf49938bb2]
	I0507 11:17:45.074573   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:17:45.087895   11892 logs.go:276] 2 containers: [a4618b4aaa6f f78da15e98b0]
	I0507 11:17:45.087962   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:17:45.103080   11892 logs.go:276] 1 containers: [f8172c370eef]
	I0507 11:17:45.103149   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:17:45.113743   11892 logs.go:276] 2 containers: [dea08e34169f 2cb73641d9d8]
	I0507 11:17:45.113808   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:17:45.124605   11892 logs.go:276] 0 containers: []
	W0507 11:17:45.124616   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:17:45.124671   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:17:45.135293   11892 logs.go:276] 2 containers: [8322d074effa 139a6ed9c230]
	I0507 11:17:45.135311   11892 logs.go:123] Gathering logs for etcd [0d973c8e62e6] ...
	I0507 11:17:45.135317   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d973c8e62e6"
	I0507 11:17:45.149143   11892 logs.go:123] Gathering logs for etcd [863c2a33feb6] ...
	I0507 11:17:45.149155   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 863c2a33feb6"
	I0507 11:17:45.163724   11892 logs.go:123] Gathering logs for kube-controller-manager [dea08e34169f] ...
	I0507 11:17:45.163735   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dea08e34169f"
	I0507 11:17:45.183435   11892 logs.go:123] Gathering logs for kube-controller-manager [2cb73641d9d8] ...
	I0507 11:17:45.183447   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cb73641d9d8"
	I0507 11:17:45.197329   11892 logs.go:123] Gathering logs for storage-provisioner [8322d074effa] ...
	I0507 11:17:45.197343   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322d074effa"
	I0507 11:17:45.208814   11892 logs.go:123] Gathering logs for kube-apiserver [620a2e8b5642] ...
	I0507 11:17:45.208827   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 620a2e8b5642"
	I0507 11:17:45.222355   11892 logs.go:123] Gathering logs for kube-scheduler [a4618b4aaa6f] ...
	I0507 11:17:45.222366   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4618b4aaa6f"
	I0507 11:17:45.236160   11892 logs.go:123] Gathering logs for kube-proxy [f8172c370eef] ...
	I0507 11:17:45.236172   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8172c370eef"
	I0507 11:17:45.247626   11892 logs.go:123] Gathering logs for kube-apiserver [a3e5338202fe] ...
	I0507 11:17:45.247637   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e5338202fe"
	I0507 11:17:45.318711   11892 logs.go:123] Gathering logs for kube-scheduler [f78da15e98b0] ...
	I0507 11:17:45.318732   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f78da15e98b0"
	I0507 11:17:45.334922   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:17:45.334934   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:17:45.356295   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:17:45.356302   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:17:45.392982   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:17:45.392993   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:17:45.399456   11892 logs.go:123] Gathering logs for coredns [b1cf49938bb2] ...
	I0507 11:17:45.399464   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1cf49938bb2"
	I0507 11:17:45.411385   11892 logs.go:123] Gathering logs for storage-provisioner [139a6ed9c230] ...
	I0507 11:17:45.411398   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 139a6ed9c230"
	I0507 11:17:45.423111   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:17:45.423122   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:17:45.434643   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:17:45.434655   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 11:17:47.974419   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:17:52.976624   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:17:52.976702   11892 kubeadm.go:591] duration metric: took 4m4.11202s to restartPrimaryControlPlane
	W0507 11:17:52.976769   11892 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0507 11:17:52.976802   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0507 11:17:54.057033   11892 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.080251167s)
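
At this point the restart budget is exhausted (the duration metric above records 4m4s of failed probes), so minikube stops trying to revive the existing control plane and rebuilds it: kubeadm reset tears down the old state, then kubeadm init (further below) re-creates it. A sketch of that fallback follows, using only the two commands visible in the log; the real invocation also passes the --ignore-preflight-errors list shown later, which is elided here.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and prints its combined output, mirroring how each
    // ssh_runner invocation above is recorded.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	// --force skips kubeadm's confirmation prompt; the CRI socket matches the log
    	if err := run("sudo", "kubeadm", "reset",
    		"--cri-socket", "/var/run/cri-dockerd.sock", "--force"); err != nil {
    		panic(err)
    	}
    	// re-initialize from the config minikube staged on the node
    	if err := run("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		panic(err)
    	}
    }
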
	I0507 11:17:54.057110   11892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 11:17:54.062399   11892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 11:17:54.065394   11892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 11:17:54.068181   11892 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 11:17:54.068186   11892 kubeadm.go:156] found existing configuration files:
	
	I0507 11:17:54.068204   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf
	I0507 11:17:54.070658   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 11:17:54.070683   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 11:17:54.073781   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf
	I0507 11:17:54.076977   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 11:17:54.076999   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 11:17:54.079814   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf
	I0507 11:17:54.082239   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 11:17:54.082264   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 11:17:54.085420   11892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf
	I0507 11:17:54.088414   11892 kubeadm.go:162] "https://control-plane.minikube.internal:51472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 11:17:54.088435   11892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
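
The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint; otherwise (or, as here, if the file is missing after the reset) it is removed so kubeadm init can regenerate it. A minimal sketch of the same check, assuming direct file access rather than the sudo grep the log actually runs:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51472" // from the log
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + name
    		data, err := os.ReadFile(path)
    		// a read error here corresponds to grep's "No such file or directory" above
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%s is stale or missing - removing\n", path)
    			os.Remove(path) // error ignored, mirrors `rm -f`
    		}
    	}
    }
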
	I0507 11:17:54.090851   11892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 11:17:54.108958   11892 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0507 11:17:54.108987   11892 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 11:17:54.162665   11892 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 11:17:54.162734   11892 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 11:17:54.162780   11892 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 11:17:54.211616   11892 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 11:17:54.215863   11892 out.go:204]   - Generating certificates and keys ...
	I0507 11:17:54.215897   11892 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 11:17:54.215926   11892 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 11:17:54.215960   11892 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0507 11:17:54.215987   11892 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0507 11:17:54.216018   11892 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0507 11:17:54.216064   11892 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0507 11:17:54.216093   11892 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0507 11:17:54.216138   11892 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0507 11:17:54.216241   11892 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0507 11:17:54.216299   11892 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0507 11:17:54.216317   11892 kubeadm.go:309] [certs] Using the existing "sa" key
	I0507 11:17:54.216348   11892 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 11:17:54.362365   11892 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 11:17:54.581324   11892 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 11:17:54.655785   11892 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 11:17:54.700386   11892 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 11:17:54.729363   11892 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 11:17:54.729782   11892 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 11:17:54.729855   11892 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 11:17:54.819454   11892 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 11:17:54.823949   11892 out.go:204]   - Booting up control plane ...
	I0507 11:17:54.824007   11892 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 11:17:54.824048   11892 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 11:17:54.824092   11892 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 11:17:54.824138   11892 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 11:17:54.824257   11892 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0507 11:17:59.329139   11892 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505597 seconds
	I0507 11:17:59.329243   11892 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 11:17:59.333119   11892 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 11:17:59.851531   11892 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 11:17:59.851828   11892 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-069000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 11:18:00.354764   11892 kubeadm.go:309] [bootstrap-token] Using token: 8r216w.1dkq7l997m0tj7pp
	I0507 11:18:00.356514   11892 out.go:204]   - Configuring RBAC rules ...
	I0507 11:18:00.356578   11892 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 11:18:00.364015   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 11:18:00.365848   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 11:18:00.366606   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 11:18:00.367358   11892 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 11:18:00.368256   11892 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 11:18:00.371176   11892 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 11:18:00.554548   11892 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 11:18:00.766330   11892 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 11:18:00.766818   11892 kubeadm.go:309] 
	I0507 11:18:00.766849   11892 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 11:18:00.766852   11892 kubeadm.go:309] 
	I0507 11:18:00.766885   11892 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 11:18:00.766889   11892 kubeadm.go:309] 
	I0507 11:18:00.766901   11892 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 11:18:00.766926   11892 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 11:18:00.766953   11892 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 11:18:00.766957   11892 kubeadm.go:309] 
	I0507 11:18:00.766981   11892 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 11:18:00.766983   11892 kubeadm.go:309] 
	I0507 11:18:00.767004   11892 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 11:18:00.767007   11892 kubeadm.go:309] 
	I0507 11:18:00.767030   11892 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 11:18:00.767061   11892 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 11:18:00.767104   11892 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 11:18:00.767111   11892 kubeadm.go:309] 
	I0507 11:18:00.767155   11892 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 11:18:00.767207   11892 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 11:18:00.767213   11892 kubeadm.go:309] 
	I0507 11:18:00.767269   11892 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8r216w.1dkq7l997m0tj7pp \
	I0507 11:18:00.767328   11892 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc \
	I0507 11:18:00.767342   11892 kubeadm.go:309] 	--control-plane 
	I0507 11:18:00.767345   11892 kubeadm.go:309] 
	I0507 11:18:00.767417   11892 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 11:18:00.767423   11892 kubeadm.go:309] 
	I0507 11:18:00.767468   11892 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8r216w.1dkq7l997m0tj7pp \
	I0507 11:18:00.767586   11892 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4f29a4f73b331d4cf393206b18522ccd48b2106136bcb0164e83081b123d8ccc 
	I0507 11:18:00.767699   11892 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
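
The join commands printed above carry a bootstrap token plus a CA pin. The --discovery-token-ca-cert-hash value is not a secret: it is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which a joining node uses to verify it is talking to the intended control plane. A sketch of recomputing it, assuming the CA lives at ca.crt under the certificateDir the log reports ("/var/lib/minikube/certs"):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's Subject Public Key Info
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
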
	I0507 11:18:00.767709   11892 cni.go:84] Creating CNI manager for ""
	I0507 11:18:00.767717   11892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:18:00.771519   11892 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0507 11:18:00.778508   11892 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0507 11:18:00.782383   11892 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0507 11:18:00.787111   11892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 11:18:00.787166   11892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 11:18:00.787227   11892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-069000 minikube.k8s.io/updated_at=2024_05_07T11_18_00_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=stopped-upgrade-069000 minikube.k8s.io/primary=true
	I0507 11:18:00.792545   11892 ops.go:34] apiserver oom_adj: -16
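
The ops.go line records that the new apiserver's OOM score adjustment is -16, i.e. the kubelet has biased the Linux OOM killer away from it. The check is just a /proc read, roughly as sketched below (the log's version is the one-liner `cat /proc/$(pgrep kube-apiserver)/oom_adj` shown above):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// find the newest kube-apiserver pid, then read its oom_adj
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
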
	I0507 11:18:00.828767   11892 kubeadm.go:1107] duration metric: took 41.647709ms to wait for elevateKubeSystemPrivileges
	W0507 11:18:00.828793   11892 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 11:18:00.828799   11892 kubeadm.go:393] duration metric: took 4m11.977539541s to StartCluster
	I0507 11:18:00.828808   11892 settings.go:142] acquiring lock: {Name:mk50bfcfedcd3b99aacdbeb1994dffd265fa3e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:18:00.828893   11892 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:18:00.829330   11892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/kubeconfig: {Name:mk2a7794036857fd378216b160722b418b125ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:18:00.829547   11892 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:18:00.833466   11892 out.go:177] * Verifying Kubernetes components...
	I0507 11:18:00.829554   11892 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 11:18:00.829640   11892 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:18:00.841473   11892 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-069000"
	I0507 11:18:00.841481   11892 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-069000"
	I0507 11:18:00.841491   11892 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-069000"
	I0507 11:18:00.841495   11892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-069000"
	I0507 11:18:00.841476   11892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0507 11:18:00.841496   11892 addons.go:243] addon storage-provisioner should already be in state true
	I0507 11:18:00.841543   11892 host.go:66] Checking if "stopped-upgrade-069000" exists ...
	I0507 11:18:00.842729   11892 kapi.go:59] client config for stopped-upgrade-069000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key", CAFile:"/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317d80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
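
The rest.Config dump above shows how the test authenticates to the profile's apiserver: a client certificate and key plus the cluster CA, with no token or basic auth. A minimal client-go equivalent follows, reusing the same paths from the log; the ServerVersion call is just an illustrative round trip (and would time out here for the same reason the healthz probes do).

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/stopped-upgrade-069000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/18804-8175/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := cs.Discovery().ServerVersion() // simple authenticated round trip
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("server version:", v.GitVersion)
    }
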
	I0507 11:18:00.842859   11892 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-069000"
	W0507 11:18:00.842864   11892 addons.go:243] addon default-storageclass should already be in state true
	I0507 11:18:00.842871   11892 host.go:66] Checking if "stopped-upgrade-069000" exists ...
	I0507 11:18:00.847402   11892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 11:18:00.851540   11892 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:18:00.851548   11892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 11:18:00.851555   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:18:00.852277   11892 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 11:18:00.852282   11892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 11:18:00.852286   11892 sshutil.go:53] new ssh client: &{IP:localhost Port:51437 SSHKeyPath:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/stopped-upgrade-069000/id_rsa Username:docker}
	I0507 11:18:00.937177   11892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 11:18:00.942340   11892 api_server.go:52] waiting for apiserver process to appear ...
	I0507 11:18:00.942383   11892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 11:18:00.946272   11892 api_server.go:72] duration metric: took 116.717ms to wait for apiserver process to appear ...
	I0507 11:18:00.946281   11892 api_server.go:88] waiting for apiserver healthz status ...
	I0507 11:18:00.946289   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:00.956928   11892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 11:18:00.958041   11892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 11:18:05.948346   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:05.948406   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:10.948763   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:10.948785   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:15.949098   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:15.949141   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:20.949529   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:20.949569   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:25.950403   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:25.950444   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:30.951218   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:30.951238   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0507 11:18:31.342424   11892 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0507 11:18:31.347306   11892 out.go:177] * Enabled addons: storage-provisioner
	I0507 11:18:31.355486   11892 addons.go:505] duration metric: took 30.526810334s for enable addons: enabled=[storage-provisioner]
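
Note the asymmetry in the addons result above: storage-provisioner only needs a manifest applied on the node, while default-storageclass requires a live API call (listing StorageClasses), and that call fails with an i/o timeout at the TCP layer rather than any HTTP or RBAC error, the same root cause as every failed healthz probe. A bare dial is the quickest way to confirm that class of failure:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 5*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err) // mirrors "dial tcp 10.0.2.15:8443: i/o timeout"
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port reachable")
    }
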
	I0507 11:18:35.952225   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:35.952267   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:40.953695   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:40.953715   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:45.954207   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:45.954256   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:50.956181   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:50.956224   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:18:55.958330   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:18:55.958372   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:19:00.959513   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:19:00.959666   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:19:00.970920   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:19:00.970995   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:19:00.981468   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:19:00.981539   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:19:00.992331   11892 logs.go:276] 2 containers: [d03b1e6e0398 85d957baa1b3]
	I0507 11:19:00.992405   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:19:01.002817   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:19:01.002885   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:19:01.016823   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:19:01.016896   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:19:01.027711   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:19:01.027780   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:19:01.038320   11892 logs.go:276] 0 containers: []
	W0507 11:19:01.038333   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:19:01.038414   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:19:01.051277   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:19:01.051294   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:19:01.051300   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:19:01.063487   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:19:01.063498   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:19:01.086821   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:19:01.086832   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:19:01.118456   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:01.118548   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:01.119482   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:19:01.119486   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:19:01.133968   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:19:01.133980   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:19:01.149581   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:19:01.149592   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:19:01.162085   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:19:01.162097   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:19:01.174119   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:19:01.174130   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:19:01.186183   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:19:01.186196   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:19:01.190434   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:19:01.190440   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:19:01.225460   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:19:01.225473   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:19:01.241904   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:19:01.241917   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:19:01.265858   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:19:01.265869   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:19:01.278391   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:01.278402   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:19:01.278426   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:19:01.278431   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:01.278435   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:01.278448   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:01.278451   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:19:11.282430   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:19:16.284898   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:19:16.285333   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:19:16.327334   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:19:16.327471   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:19:16.350377   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:19:16.350492   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:19:16.367754   11892 logs.go:276] 2 containers: [d03b1e6e0398 85d957baa1b3]
	I0507 11:19:16.367831   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:19:16.380567   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:19:16.380633   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:19:16.391437   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:19:16.391505   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:19:16.402455   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:19:16.402519   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:19:16.413557   11892 logs.go:276] 0 containers: []
	W0507 11:19:16.413576   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:19:16.413634   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:19:16.424413   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:19:16.424429   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:19:16.424437   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:19:16.436395   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:19:16.436407   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:19:16.451871   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:19:16.451881   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:19:16.478154   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:19:16.478166   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:19:16.502586   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:19:16.502593   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:19:16.535826   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:16.535919   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:16.536858   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:19:16.536862   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:19:16.541378   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:19:16.541384   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:19:16.556206   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:19:16.556217   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:19:16.568424   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:19:16.568435   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:19:16.580272   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:19:16.580282   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:19:16.592139   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:19:16.592151   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:19:16.635800   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:19:16.635815   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:19:16.650499   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:19:16.650510   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:19:16.662739   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:16.662748   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:19:16.662777   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:19:16.662782   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:16.662786   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:16.662791   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:16.662806   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:19:26.666754   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:19:31.669183   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:19:31.669630   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:19:31.707376   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:19:31.707514   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:19:31.729310   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:19:31.729410   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:19:31.745277   11892 logs.go:276] 2 containers: [d03b1e6e0398 85d957baa1b3]
	I0507 11:19:31.745359   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:19:31.761368   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:19:31.761431   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:19:31.771828   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:19:31.771896   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:19:31.782112   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:19:31.782183   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:19:31.792006   11892 logs.go:276] 0 containers: []
	W0507 11:19:31.792018   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:19:31.792074   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:19:31.802114   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:19:31.802132   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:19:31.802142   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:19:31.825448   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:19:31.825456   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:19:31.837038   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:19:31.837053   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:19:31.870003   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:31.870094   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:31.871031   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:19:31.871037   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:19:31.874973   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:19:31.874978   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:19:31.889199   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:19:31.889212   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:19:31.900739   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:19:31.900751   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:19:31.917313   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:19:31.917325   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:19:31.928535   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:19:31.928549   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:19:31.963528   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:19:31.963540   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:19:31.977446   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:19:31.977457   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:19:31.989151   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:19:31.989161   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:19:32.000963   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:19:32.000974   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:19:32.017924   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:32.017934   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:19:32.017958   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:19:32.017964   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:32.017967   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:32.017984   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:32.017990   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:19:42.020679   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:19:47.023267   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:19:47.023724   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:19:47.067739   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:19:47.067871   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:19:47.086747   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:19:47.086840   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:19:47.101031   11892 logs.go:276] 2 containers: [d03b1e6e0398 85d957baa1b3]
	I0507 11:19:47.101107   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:19:47.112485   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:19:47.112554   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:19:47.123459   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:19:47.123528   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:19:47.135309   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:19:47.135374   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:19:47.145627   11892 logs.go:276] 0 containers: []
	W0507 11:19:47.145639   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:19:47.145688   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:19:47.156210   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:19:47.156227   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:19:47.156232   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:19:47.189376   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:47.189466   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:47.190407   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:19:47.190411   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:19:47.194619   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:19:47.194625   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:19:47.210544   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:19:47.210553   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:19:47.223986   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:19:47.223999   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:19:47.235874   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:19:47.235886   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:19:47.256159   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:19:47.256170   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:19:47.280945   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:19:47.280958   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:19:47.315279   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:19:47.315292   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:19:47.326921   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:19:47.326935   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:19:47.338271   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:19:47.338283   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:19:47.356104   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:19:47.356114   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:19:47.367460   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:19:47.367471   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:19:47.378714   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:47.378726   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:19:47.378752   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:19:47.378757   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:19:47.378763   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:19:47.378769   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:19:47.378771   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:19:57.382800   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:20:02.385401   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:20:02.385784   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:20:02.425330   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:20:02.425467   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:20:02.446474   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:20:02.446576   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:20:02.461497   11892 logs.go:276] 2 containers: [d03b1e6e0398 85d957baa1b3]
	I0507 11:20:02.461578   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:20:02.474100   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:20:02.474169   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:20:02.489067   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:20:02.489128   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:20:02.499925   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:20:02.499995   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:20:02.514999   11892 logs.go:276] 0 containers: []
	W0507 11:20:02.515010   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:20:02.515067   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:20:02.525178   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:20:02.525193   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:20:02.525198   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:20:02.529805   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:20:02.529815   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:20:02.541357   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:20:02.541369   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:20:02.553422   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:20:02.553431   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:20:02.565059   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:20:02.565069   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:20:02.582613   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:20:02.582624   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:20:02.615140   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:02.615233   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:02.616200   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:20:02.616206   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:20:02.655977   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:20:02.655989   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:20:02.671030   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:20:02.671041   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:20:02.684625   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:20:02.684637   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:20:02.699517   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:20:02.699530   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:20:02.711202   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:20:02.711217   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:20:02.734186   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:20:02.734195   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:20:02.745602   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:02.745615   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:20:02.745639   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:20:02.745645   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:02.745648   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:02.745652   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:02.745654   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:12.749615   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:20:17.750954   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:20:17.751409   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:20:17.787926   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:20:17.788050   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:20:17.813227   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:20:17.813334   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:20:17.827373   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:20:17.827454   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:20:17.838880   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:20:17.838942   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:20:17.849292   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:20:17.849360   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:20:17.859447   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:20:17.859517   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:20:17.869669   11892 logs.go:276] 0 containers: []
	W0507 11:20:17.869683   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:20:17.869742   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:20:17.881716   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:20:17.881732   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:20:17.881737   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:20:17.915418   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:20:17.915431   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:20:17.927281   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:20:17.927292   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:20:17.946014   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:20:17.946025   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:20:17.957844   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:20:17.957857   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:20:17.990175   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:17.990266   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:17.991459   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:20:17.991463   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:20:17.995665   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:20:17.995673   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:20:18.010324   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:20:18.010333   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:20:18.024053   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:20:18.024064   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:20:18.035479   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:20:18.035489   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:20:18.047391   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:20:18.047405   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:20:18.061396   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:20:18.061410   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:20:18.076334   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:20:18.076347   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:20:18.088077   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:20:18.088089   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:20:18.112573   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:20:18.112583   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:20:18.124232   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:18.124246   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:20:18.124272   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:20:18.124277   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:18.124288   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:18.124295   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:18.124297   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:28.128200   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:20:33.130898   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:20:33.131307   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:20:33.167148   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:20:33.167282   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:20:33.186782   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:20:33.186880   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:20:33.201865   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:20:33.201940   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:20:33.213818   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:20:33.213887   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:20:33.224448   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:20:33.224518   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:20:33.234779   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:20:33.234846   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:20:33.245276   11892 logs.go:276] 0 containers: []
	W0507 11:20:33.245292   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:20:33.245346   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:20:33.255790   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:20:33.255805   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:20:33.255810   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:20:33.267193   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:20:33.267206   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:20:33.281953   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:20:33.281965   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:20:33.297380   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:20:33.297392   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:20:33.314984   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:20:33.314994   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:20:33.339727   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:20:33.339733   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:20:33.351394   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:20:33.351409   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:20:33.383830   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:33.383920   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:33.385104   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:20:33.385107   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:20:33.420059   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:20:33.420071   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:20:33.433897   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:20:33.433908   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:20:33.454139   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:20:33.454153   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:20:33.466023   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:20:33.466036   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:20:33.470226   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:20:33.470235   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:20:33.484702   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:20:33.484715   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:20:33.496440   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:20:33.496449   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:20:33.508438   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:33.508447   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:20:33.508474   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:20:33.508480   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:33.508484   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:33.508526   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:33.508551   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:43.512492   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:20:48.515106   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:20:48.515365   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:20:48.543687   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:20:48.543762   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:20:48.555772   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:20:48.555846   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:20:48.567590   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:20:48.567668   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:20:48.581409   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:20:48.581480   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:20:48.593373   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:20:48.593451   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:20:48.604902   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:20:48.604980   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:20:48.615795   11892 logs.go:276] 0 containers: []
	W0507 11:20:48.615809   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:20:48.615888   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:20:48.627592   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:20:48.627608   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:20:48.627613   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:20:48.646865   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:20:48.646880   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:20:48.662000   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:20:48.662013   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:20:48.674404   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:20:48.674414   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:20:48.687260   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:20:48.687271   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:20:48.712456   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:20:48.712482   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:20:48.748632   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:20:48.748643   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:20:48.764178   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:20:48.764188   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:20:48.783080   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:20:48.783089   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:20:48.795308   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:20:48.795319   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:20:48.808023   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:20:48.808035   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:20:48.820176   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:20:48.820187   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:20:48.853094   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:48.853191   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:48.854423   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:20:48.854430   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:20:48.858996   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:20:48.859002   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:20:48.873667   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:20:48.873678   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:20:48.886374   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:48.886384   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:20:48.886410   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:20:48.886414   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:20:48.886418   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:20:48.886430   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:48.886432   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:58.890283   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:21:03.892381   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:21:03.892799   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:21:03.927544   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:21:03.927667   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:21:03.947927   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:21:03.948024   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:21:03.973162   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:21:03.973232   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:21:03.990999   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:21:03.991061   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:21:04.005693   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:21:04.005762   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:21:04.015951   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:21:04.016012   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:21:04.025883   11892 logs.go:276] 0 containers: []
	W0507 11:21:04.025894   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:21:04.025951   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:21:04.036367   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:21:04.036384   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:21:04.036389   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:21:04.069879   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:04.069977   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:04.071182   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:21:04.071190   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:21:04.095221   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:21:04.095229   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:21:04.113326   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:21:04.113338   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:21:04.131381   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:21:04.131394   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:21:04.142602   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:21:04.142612   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:21:04.154861   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:21:04.154873   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:21:04.159061   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:21:04.159067   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:21:04.170406   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:21:04.170417   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:21:04.183382   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:21:04.183392   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:21:04.198116   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:21:04.198127   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:21:04.244328   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:21:04.244341   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:21:04.260095   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:21:04.260109   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:21:04.273932   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:21:04.273945   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:21:04.285040   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:21:04.285053   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:21:04.298021   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:04.298034   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:21:04.298062   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:21:04.298066   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:04.298069   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:04.298095   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:04.298099   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:14.300553   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:21:19.302650   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:21:19.302834   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:21:19.321596   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:21:19.321670   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:21:19.332545   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:21:19.332619   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:21:19.343049   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:21:19.343122   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:21:19.353655   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:21:19.353719   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:21:19.364352   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:21:19.364423   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:21:19.374669   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:21:19.374751   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:21:19.384363   11892 logs.go:276] 0 containers: []
	W0507 11:21:19.384378   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:21:19.384436   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:21:19.394691   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:21:19.394708   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:21:19.394714   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:21:19.405836   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:21:19.405845   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:21:19.430377   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:21:19.430386   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:21:19.466541   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:21:19.466554   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:21:19.480457   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:21:19.480467   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:21:19.493307   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:21:19.493317   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:21:19.504583   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:21:19.504593   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:21:19.515667   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:21:19.515681   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:21:19.520300   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:21:19.520309   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:21:19.535528   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:21:19.535539   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:21:19.553611   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:21:19.553623   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:21:19.565041   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:21:19.565051   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:21:19.596324   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:19.596415   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:19.597612   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:21:19.597617   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:21:19.611360   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:21:19.611369   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:21:19.622752   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:21:19.622765   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:21:19.633971   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:19.633983   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:21:19.634010   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:21:19.634014   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:19.634017   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:19.634022   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:19.634025   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:29.637963   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:21:34.640699   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:21:34.641133   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:21:34.679563   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:21:34.679682   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:21:34.701420   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:21:34.701496   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:21:34.714320   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:21:34.714395   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:21:34.725550   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:21:34.725620   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:21:34.738089   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:21:34.738162   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:21:34.748439   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:21:34.748503   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:21:34.759377   11892 logs.go:276] 0 containers: []
	W0507 11:21:34.759395   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:21:34.759451   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:21:34.769942   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:21:34.769961   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:21:34.769967   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:21:34.804034   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:21:34.804044   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:21:34.830641   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:21:34.830650   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:21:34.846976   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:21:34.846992   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:21:34.860696   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:21:34.860708   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:21:34.885998   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:21:34.886021   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:21:34.899656   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:21:34.899667   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:21:34.913245   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:21:34.913258   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:21:34.926469   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:21:34.926483   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:21:34.943693   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:21:34.943708   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:21:34.978389   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:34.978482   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:34.979709   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:21:34.979716   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:21:34.991691   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:21:34.991703   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:21:35.004134   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:21:35.004145   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:21:35.008447   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:21:35.008455   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:21:35.020300   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:21:35.020313   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:21:35.038869   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:35.038879   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:21:35.038902   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:21:35.038906   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:35.038909   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:35.038913   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:35.038927   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:45.042820   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:21:50.043501   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:21:50.043571   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 11:21:50.056782   11892 logs.go:276] 1 containers: [75a802f96205]
	I0507 11:21:50.056837   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 11:21:50.067675   11892 logs.go:276] 1 containers: [1fd8756a7c65]
	I0507 11:21:50.067728   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 11:21:50.078475   11892 logs.go:276] 4 containers: [cb1a94a3bacd bc06b25c228b d03b1e6e0398 85d957baa1b3]
	I0507 11:21:50.078545   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 11:21:50.090009   11892 logs.go:276] 1 containers: [5bdac9fb6ee0]
	I0507 11:21:50.090074   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 11:21:50.102331   11892 logs.go:276] 1 containers: [749c2128c4af]
	I0507 11:21:50.102385   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 11:21:50.113089   11892 logs.go:276] 1 containers: [6cef98e1a7a8]
	I0507 11:21:50.113145   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 11:21:50.123578   11892 logs.go:276] 0 containers: []
	W0507 11:21:50.123590   11892 logs.go:278] No container was found matching "kindnet"
	I0507 11:21:50.123647   11892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0507 11:21:50.135030   11892 logs.go:276] 1 containers: [fd4f39baf1d4]
	I0507 11:21:50.135049   11892 logs.go:123] Gathering logs for coredns [cb1a94a3bacd] ...
	I0507 11:21:50.135054   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb1a94a3bacd"
	I0507 11:21:50.147544   11892 logs.go:123] Gathering logs for coredns [bc06b25c228b] ...
	I0507 11:21:50.147556   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc06b25c228b"
	I0507 11:21:50.159911   11892 logs.go:123] Gathering logs for coredns [d03b1e6e0398] ...
	I0507 11:21:50.159924   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03b1e6e0398"
	I0507 11:21:50.173030   11892 logs.go:123] Gathering logs for kube-scheduler [5bdac9fb6ee0] ...
	I0507 11:21:50.173041   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bdac9fb6ee0"
	I0507 11:21:50.189444   11892 logs.go:123] Gathering logs for container status ...
	I0507 11:21:50.189457   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 11:21:50.202961   11892 logs.go:123] Gathering logs for dmesg ...
	I0507 11:21:50.202974   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 11:21:50.207954   11892 logs.go:123] Gathering logs for describe nodes ...
	I0507 11:21:50.207963   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 11:21:50.245504   11892 logs.go:123] Gathering logs for etcd [1fd8756a7c65] ...
	I0507 11:21:50.245514   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fd8756a7c65"
	I0507 11:21:50.260573   11892 logs.go:123] Gathering logs for kube-controller-manager [6cef98e1a7a8] ...
	I0507 11:21:50.260587   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef98e1a7a8"
	I0507 11:21:50.319087   11892 logs.go:123] Gathering logs for kubelet ...
	I0507 11:21:50.319105   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0507 11:21:50.354239   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:50.354341   11892 logs.go:138] Found kubelet problem: May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:50.355595   11892 logs.go:123] Gathering logs for kube-apiserver [75a802f96205] ...
	I0507 11:21:50.355602   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75a802f96205"
	I0507 11:21:50.371440   11892 logs.go:123] Gathering logs for kube-proxy [749c2128c4af] ...
	I0507 11:21:50.371449   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749c2128c4af"
	I0507 11:21:50.383401   11892 logs.go:123] Gathering logs for coredns [85d957baa1b3] ...
	I0507 11:21:50.383413   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85d957baa1b3"
	I0507 11:21:50.397155   11892 logs.go:123] Gathering logs for Docker ...
	I0507 11:21:50.397170   11892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 11:21:50.423136   11892 logs.go:123] Gathering logs for storage-provisioner [fd4f39baf1d4] ...
	I0507 11:21:50.423144   11892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd4f39baf1d4"
	I0507 11:21:50.435488   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:50.435500   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 11:21:50.435527   11892 out.go:239] X Problems detected in kubelet:
	W0507 11:21:50.435531   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: W0507 18:18:14.518637   10464 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	W0507 11:21:50.435535   11892 out.go:239]   May 07 18:18:14 stopped-upgrade-069000 kubelet[10464]: E0507 18:18:14.518649   10464 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-069000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-069000' and this object
	I0507 11:21:50.435539   11892 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:50.435550   11892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:00.439660   11892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0507 11:22:05.441482   11892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0507 11:22:05.447453   11892 out.go:177] 
	W0507 11:22:05.453771   11892 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0507 11:22:05.453813   11892 out.go:239] * 
	W0507 11:22:05.456028   11892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:05.466567   11892 out.go:177] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-069000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (579.19s)
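Note on the failure above: the apiserver healthz probe at https://10.0.2.15:8443/healthz never reported healthy within the 6m0s node wait, while the kubelet kept logging the RBAC error shown in the "Problems detected in kubelet" blocks (the node is not authorized to list the "coredns" ConfigMap). A minimal manual re-check of the same probe, assuming the guest is still reachable over SSH and that curl is available in the guest image (neither is shown in the log):

	out/minikube-darwin-arm64 ssh -p stopped-upgrade-069000 -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	# a healthy apiserver answers "ok"; in this run the request would time out instead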
TestPause/serial/Start (9.93s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-259000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-259000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.892626375s)
-- stdout --
	* [pause-259000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-259000" primary control-plane node in "pause-259000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-259000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-259000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-259000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-259000 -n pause-259000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-259000 -n pause-259000: exit status 7 (36.858167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-259000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.93s)
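Note on the failure above: the qemu2 driver dials the socket_vmnet unix socket on the host, and "Connection refused" means nothing was listening at /var/run/socket_vmnet. A minimal host-side sanity check, sketched under the assumption that socket_vmnet was installed as a launchd service on this machine (the exact service label is not shown in the log):

	ls -l /var/run/socket_vmnet                  # the socket file should exist
	sudo launchctl list | grep -i socket_vmnet   # the service should be loaded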
TestNoKubernetes/serial/StartWithK8s (9.8s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2 : exit status 80 (9.7654255s)
-- stdout --
	* [NoKubernetes-274000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-274000" primary control-plane node in "NoKubernetes-274000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-274000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000: exit status 7 (30.7765ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-274000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.80s)
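Note on the failure above: the log itself suggests the recovery path ("minikube delete -p NoKubernetes-274000" may fix it). A sketch of that path, reusing the binary and flags from the failing invocation; it only helps once socket_vmnet is reachable again:

	out/minikube-darwin-arm64 delete -p NoKubernetes-274000
	out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2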
TestNoKubernetes/serial/StartWithStopK8s (5.27s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239617125s)
-- stdout --
	* [NoKubernetes-274000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-274000
	* Restarting existing qemu2 VM for "NoKubernetes-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000: exit status 7 (29.50125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-274000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.27s)
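Note on the failure above: because the profile survives the previous failed run, this attempt takes the "Restarting existing qemu2 VM" path and fails in "driver start" rather than in "create". The cached host state can be confirmed with the same status command the post-mortem uses:

	out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000
	# reports "Stopped" here, matching the post-mortem output above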
TestNoKubernetes/serial/Start (5.29s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2 : exit status 80 (5.235228541s)
-- stdout --
	* [NoKubernetes-274000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-274000
	* Restarting existing qemu2 VM for "NoKubernetes-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000: exit status 7 (53.352834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-274000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)
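Once the daemon is reachable again, the stale profile can be rebuilt with the recovery commands the log itself suggests, using the same binary and arguments the harness invoked:

	out/minikube-darwin-arm64 delete -p NoKubernetes-274000
	out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --driver=qemu2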

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2 : exit status 80 (5.281759209s)

-- stdout --
	* [NoKubernetes-274000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-274000
	* Restarting existing qemu2 VM for "NoKubernetes-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-274000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-274000 -n NoKubernetes-274000: exit status 7 (52.551917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-274000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/auto/Start (10.08s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.080492291s)

-- stdout --
	* [auto-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-359000" primary control-plane node in "auto-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:20:14.674087   12205 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:20:14.674216   12205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:14.674220   12205 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:14.674222   12205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:14.674356   12205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:20:14.675419   12205 out.go:298] Setting JSON to false
	I0507 11:20:14.691964   12205 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6585,"bootTime":1715099429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:20:14.692023   12205 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:20:14.696699   12205 out.go:177] * [auto-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:20:14.708639   12205 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:20:14.704775   12205 notify.go:220] Checking for updates...
	I0507 11:20:14.714692   12205 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:20:14.717650   12205 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:20:14.724662   12205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:20:14.727638   12205 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:20:14.730648   12205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:20:14.734010   12205 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:20:14.734086   12205 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:20:14.734125   12205 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:20:14.737625   12205 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:20:14.744736   12205 start.go:297] selected driver: qemu2
	I0507 11:20:14.744743   12205 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:20:14.744753   12205 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:20:14.747093   12205 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:20:14.749714   12205 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:20:14.752704   12205 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:20:14.752721   12205 cni.go:84] Creating CNI manager for ""
	I0507 11:20:14.752727   12205 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:20:14.752730   12205 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:20:14.752755   12205 start.go:340] cluster config:
	{Name:auto-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:20:14.757151   12205 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:20:14.764603   12205 out.go:177] * Starting "auto-359000" primary control-plane node in "auto-359000" cluster
	I0507 11:20:14.768731   12205 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:20:14.768748   12205 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:20:14.768755   12205 cache.go:56] Caching tarball of preloaded images
	I0507 11:20:14.768819   12205 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:20:14.768825   12205 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:20:14.768895   12205 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/auto-359000/config.json ...
	I0507 11:20:14.768908   12205 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/auto-359000/config.json: {Name:mk4399087cb44bee3fab8c9b663a1bfb06197b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:20:14.769327   12205 start.go:360] acquireMachinesLock for auto-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:14.769362   12205 start.go:364] duration metric: took 29.166µs to acquireMachinesLock for "auto-359000"
	I0507 11:20:14.769373   12205 start.go:93] Provisioning new machine with config: &{Name:auto-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:14.769409   12205 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:14.777641   12205 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:14.793885   12205 start.go:159] libmachine.API.Create for "auto-359000" (driver="qemu2")
	I0507 11:20:14.793917   12205 client.go:168] LocalClient.Create starting
	I0507 11:20:14.793987   12205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:14.794018   12205 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:14.794026   12205 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:14.794065   12205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:14.794088   12205 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:14.794098   12205 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:14.794486   12205 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:14.932742   12205 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:15.317217   12205 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:15.317231   12205 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:15.317413   12205 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2
	I0507 11:20:15.330375   12205 main.go:141] libmachine: STDOUT: 
	I0507 11:20:15.330403   12205 main.go:141] libmachine: STDERR: 
	I0507 11:20:15.330478   12205 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2 +20000M
	I0507 11:20:15.341959   12205 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:15.341977   12205 main.go:141] libmachine: STDERR: 
	I0507 11:20:15.342000   12205 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2
	I0507 11:20:15.342005   12205 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:15.342042   12205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:37:8f:64:1e:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2
	I0507 11:20:15.343876   12205 main.go:141] libmachine: STDOUT: 
	I0507 11:20:15.343891   12205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:15.343910   12205 client.go:171] duration metric: took 550.00325ms to LocalClient.Create
	I0507 11:20:17.346059   12205 start.go:128] duration metric: took 2.576695875s to createHost
	I0507 11:20:17.346134   12205 start.go:83] releasing machines lock for "auto-359000", held for 2.576836417s
	W0507 11:20:17.346256   12205 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:17.361472   12205 out.go:177] * Deleting "auto-359000" in qemu2 ...
	W0507 11:20:17.390998   12205 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:17.391035   12205 start.go:728] Will try again in 5 seconds ...
	I0507 11:20:22.393211   12205 start.go:360] acquireMachinesLock for auto-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:22.393832   12205 start.go:364] duration metric: took 478.917µs to acquireMachinesLock for "auto-359000"
	I0507 11:20:22.394010   12205 start.go:93] Provisioning new machine with config: &{Name:auto-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:22.394333   12205 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:22.402990   12205 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:22.453375   12205 start.go:159] libmachine.API.Create for "auto-359000" (driver="qemu2")
	I0507 11:20:22.453425   12205 client.go:168] LocalClient.Create starting
	I0507 11:20:22.453553   12205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:22.453621   12205 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:22.453642   12205 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:22.453711   12205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:22.453755   12205 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:22.453765   12205 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:22.454324   12205 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:22.603159   12205 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:22.663343   12205 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:22.663357   12205 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:22.663559   12205 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2
	I0507 11:20:22.676240   12205 main.go:141] libmachine: STDOUT: 
	I0507 11:20:22.676264   12205 main.go:141] libmachine: STDERR: 
	I0507 11:20:22.676341   12205 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2 +20000M
	I0507 11:20:22.687317   12205 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:22.687341   12205 main.go:141] libmachine: STDERR: 
	I0507 11:20:22.687358   12205 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2
	I0507 11:20:22.687363   12205 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:22.687409   12205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:f6:71:29:6a:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/auto-359000/disk.qcow2
	I0507 11:20:22.689295   12205 main.go:141] libmachine: STDOUT: 
	I0507 11:20:22.689315   12205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:22.689330   12205 client.go:171] duration metric: took 235.904583ms to LocalClient.Create
	I0507 11:20:24.691409   12205 start.go:128] duration metric: took 2.297101958s to createHost
	I0507 11:20:24.691439   12205 start.go:83] releasing machines lock for "auto-359000", held for 2.29765275s
	W0507 11:20:24.691570   12205 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:24.700727   12205 out.go:177] 
	W0507 11:20:24.704771   12205 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:20:24.704785   12205 out.go:239] * 
	* 
	W0507 11:20:24.705593   12205 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:20:24.717761   12205 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.08s)
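The libmachine trace above shows the refusal happens when socket_vmnet_client opens /var/run/socket_vmnet before exec'ing qemu-system-aarch64. The connection step can be reproduced in isolation, outside the test harness (a sketch; `true` is just a harmless stand-in for the wrapped command, since the client connects to the socket first):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, this fails immediately with the same "Connection refused" seen throughout this run.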

TestNetworkPlugins/group/calico/Start (9.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.861697291s)

-- stdout --
	* [calico-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-359000" primary control-plane node in "calico-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:20:26.916369   12317 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:20:26.916498   12317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:26.916501   12317 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:26.916504   12317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:26.916647   12317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:20:26.917702   12317 out.go:298] Setting JSON to false
	I0507 11:20:26.934039   12317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6597,"bootTime":1715099429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:20:26.934100   12317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:20:26.940247   12317 out.go:177] * [calico-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:20:26.948111   12317 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:20:26.952102   12317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:20:26.948202   12317 notify.go:220] Checking for updates...
	I0507 11:20:26.955027   12317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:20:26.958016   12317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:20:26.961040   12317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:20:26.962578   12317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:20:26.966400   12317 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:20:26.966465   12317 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:20:26.966528   12317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:20:26.971028   12317 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:20:26.977060   12317 start.go:297] selected driver: qemu2
	I0507 11:20:26.977065   12317 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:20:26.977070   12317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:20:26.979136   12317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:20:26.982047   12317 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:20:26.985091   12317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:20:26.985109   12317 cni.go:84] Creating CNI manager for "calico"
	I0507 11:20:26.985113   12317 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0507 11:20:26.985140   12317 start.go:340] cluster config:
	{Name:calico-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:20:26.989600   12317 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:20:26.998076   12317 out.go:177] * Starting "calico-359000" primary control-plane node in "calico-359000" cluster
	I0507 11:20:27.002007   12317 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:20:27.002025   12317 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:20:27.002033   12317 cache.go:56] Caching tarball of preloaded images
	I0507 11:20:27.002095   12317 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:20:27.002100   12317 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:20:27.002173   12317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/calico-359000/config.json ...
	I0507 11:20:27.002185   12317 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/calico-359000/config.json: {Name:mkda2cace486ba530ee918af522071933296aaa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:20:27.002402   12317 start.go:360] acquireMachinesLock for calico-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:27.002437   12317 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "calico-359000"
	I0507 11:20:27.002449   12317 start.go:93] Provisioning new machine with config: &{Name:calico-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:27.002481   12317 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:27.010968   12317 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:27.027346   12317 start.go:159] libmachine.API.Create for "calico-359000" (driver="qemu2")
	I0507 11:20:27.027368   12317 client.go:168] LocalClient.Create starting
	I0507 11:20:27.027424   12317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:27.027453   12317 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:27.027463   12317 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:27.027496   12317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:27.027519   12317 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:27.027526   12317 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:27.027875   12317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:27.163698   12317 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:27.232728   12317 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:27.232734   12317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:27.232913   12317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2
	I0507 11:20:27.245172   12317 main.go:141] libmachine: STDOUT: 
	I0507 11:20:27.245193   12317 main.go:141] libmachine: STDERR: 
	I0507 11:20:27.245271   12317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2 +20000M
	I0507 11:20:27.256500   12317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:27.256528   12317 main.go:141] libmachine: STDERR: 
	I0507 11:20:27.256541   12317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2
	I0507 11:20:27.256546   12317 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:27.256575   12317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c4:4d:48:d4:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2
	I0507 11:20:27.258315   12317 main.go:141] libmachine: STDOUT: 
	I0507 11:20:27.258330   12317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:27.258349   12317 client.go:171] duration metric: took 230.982875ms to LocalClient.Create
	I0507 11:20:29.260617   12317 start.go:128] duration metric: took 2.258173208s to createHost
	I0507 11:20:29.260692   12317 start.go:83] releasing machines lock for "calico-359000", held for 2.258309791s
	W0507 11:20:29.260758   12317 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:29.271138   12317 out.go:177] * Deleting "calico-359000" in qemu2 ...
	W0507 11:20:29.301250   12317 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:29.301288   12317 start.go:728] Will try again in 5 seconds ...
	I0507 11:20:34.301495   12317 start.go:360] acquireMachinesLock for calico-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:34.301962   12317 start.go:364] duration metric: took 379.875µs to acquireMachinesLock for "calico-359000"
	I0507 11:20:34.302013   12317 start.go:93] Provisioning new machine with config: &{Name:calico-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:34.302299   12317 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:34.309949   12317 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:34.348432   12317 start.go:159] libmachine.API.Create for "calico-359000" (driver="qemu2")
	I0507 11:20:34.348499   12317 client.go:168] LocalClient.Create starting
	I0507 11:20:34.348649   12317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:34.348717   12317 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:34.348733   12317 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:34.348788   12317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:34.348828   12317 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:34.348841   12317 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:34.349310   12317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:34.494470   12317 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:34.680746   12317 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:34.680756   12317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:34.681241   12317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2
	I0507 11:20:34.694220   12317 main.go:141] libmachine: STDOUT: 
	I0507 11:20:34.694247   12317 main.go:141] libmachine: STDERR: 
	I0507 11:20:34.694308   12317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2 +20000M
	I0507 11:20:34.705567   12317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:34.705584   12317 main.go:141] libmachine: STDERR: 
	I0507 11:20:34.705599   12317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2
	I0507 11:20:34.705603   12317 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:34.705650   12317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a9:44:1d:eb:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/calico-359000/disk.qcow2
	I0507 11:20:34.707427   12317 main.go:141] libmachine: STDOUT: 
	I0507 11:20:34.707442   12317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:34.707455   12317 client.go:171] duration metric: took 358.950917ms to LocalClient.Create
	I0507 11:20:36.709606   12317 start.go:128] duration metric: took 2.407333583s to createHost
	I0507 11:20:36.709762   12317 start.go:83] releasing machines lock for "calico-359000", held for 2.4077655s
	W0507 11:20:36.710122   12317 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:36.719528   12317 out.go:177] 
	W0507 11:20:36.725787   12317 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:20:36.725827   12317 out.go:239] * 
	* 
	W0507 11:20:36.728273   12317 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:20:36.740569   12317 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.86s)
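If the daemon is confirmed down, restarting it should clear this entire class of failures. The upstream socket_vmnet README documents a direct launch of the form below; the gateway address is upstream's default and an assumption for this host:

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet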

TestNetworkPlugins/group/custom-flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.778903125s)

-- stdout --
	* [custom-flannel-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-359000" primary control-plane node in "custom-flannel-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:20:39.162188   12443 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:20:39.162330   12443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:39.162334   12443 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:39.162336   12443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:39.162457   12443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:20:39.163408   12443 out.go:298] Setting JSON to false
	I0507 11:20:39.179702   12443 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6610,"bootTime":1715099429,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:20:39.179790   12443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:20:39.184316   12443 out.go:177] * [custom-flannel-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:20:39.199091   12443 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:20:39.194223   12443 notify.go:220] Checking for updates...
	I0507 11:20:39.205100   12443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:20:39.208087   12443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:20:39.211102   12443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:20:39.214109   12443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:20:39.217070   12443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:20:39.220481   12443 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:20:39.220549   12443 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:20:39.220590   12443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:20:39.225054   12443 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:20:39.232076   12443 start.go:297] selected driver: qemu2
	I0507 11:20:39.232085   12443 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:20:39.232093   12443 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:20:39.234294   12443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:20:39.237141   12443 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:20:39.240148   12443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:20:39.240164   12443 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0507 11:20:39.240170   12443 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0507 11:20:39.240200   12443 start.go:340] cluster config:
	{Name:custom-flannel-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:20:39.244316   12443 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:20:39.252171   12443 out.go:177] * Starting "custom-flannel-359000" primary control-plane node in "custom-flannel-359000" cluster
	I0507 11:20:39.256091   12443 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:20:39.256104   12443 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:20:39.256107   12443 cache.go:56] Caching tarball of preloaded images
	I0507 11:20:39.256156   12443 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:20:39.256161   12443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:20:39.256211   12443 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/custom-flannel-359000/config.json ...
	I0507 11:20:39.256222   12443 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/custom-flannel-359000/config.json: {Name:mkb67e2285dacb7056e4f7dab62bdfd3af649808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:20:39.256633   12443 start.go:360] acquireMachinesLock for custom-flannel-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:39.256669   12443 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "custom-flannel-359000"
	I0507 11:20:39.256678   12443 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:39.256701   12443 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:39.265083   12443 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:39.280042   12443 start.go:159] libmachine.API.Create for "custom-flannel-359000" (driver="qemu2")
	I0507 11:20:39.280066   12443 client.go:168] LocalClient.Create starting
	I0507 11:20:39.280131   12443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:39.280164   12443 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:39.280173   12443 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:39.280211   12443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:39.280232   12443 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:39.280237   12443 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:39.280690   12443 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:39.418473   12443 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:39.458675   12443 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:39.458681   12443 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:39.458855   12443 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2
	I0507 11:20:39.471486   12443 main.go:141] libmachine: STDOUT: 
	I0507 11:20:39.471510   12443 main.go:141] libmachine: STDERR: 
	I0507 11:20:39.471563   12443 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2 +20000M
	I0507 11:20:39.482400   12443 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:39.482421   12443 main.go:141] libmachine: STDERR: 
	I0507 11:20:39.482443   12443 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2
	I0507 11:20:39.482449   12443 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:39.482480   12443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:fb:f0:b7:c7:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2
	I0507 11:20:39.484262   12443 main.go:141] libmachine: STDOUT: 
	I0507 11:20:39.484281   12443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:39.484304   12443 client.go:171] duration metric: took 204.238459ms to LocalClient.Create
	I0507 11:20:41.486458   12443 start.go:128] duration metric: took 2.229785875s to createHost
	I0507 11:20:41.486624   12443 start.go:83] releasing machines lock for "custom-flannel-359000", held for 2.229917833s
	W0507 11:20:41.486719   12443 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:41.498077   12443 out.go:177] * Deleting "custom-flannel-359000" in qemu2 ...
	W0507 11:20:41.529050   12443 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:41.529084   12443 start.go:728] Will try again in 5 seconds ...
	I0507 11:20:46.529428   12443 start.go:360] acquireMachinesLock for custom-flannel-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:46.529971   12443 start.go:364] duration metric: took 432.959µs to acquireMachinesLock for "custom-flannel-359000"
	I0507 11:20:46.530108   12443 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:46.530340   12443 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:46.540072   12443 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:46.591149   12443 start.go:159] libmachine.API.Create for "custom-flannel-359000" (driver="qemu2")
	I0507 11:20:46.591201   12443 client.go:168] LocalClient.Create starting
	I0507 11:20:46.591316   12443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:46.591385   12443 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:46.591404   12443 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:46.591472   12443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:46.591515   12443 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:46.591526   12443 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:46.592105   12443 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:46.739699   12443 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:46.842584   12443 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:46.842590   12443 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:46.842753   12443 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2
	I0507 11:20:46.855522   12443 main.go:141] libmachine: STDOUT: 
	I0507 11:20:46.855543   12443 main.go:141] libmachine: STDERR: 
	I0507 11:20:46.855601   12443 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2 +20000M
	I0507 11:20:46.866660   12443 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:46.866677   12443 main.go:141] libmachine: STDERR: 
	I0507 11:20:46.866688   12443 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2
	I0507 11:20:46.866693   12443 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:46.866741   12443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a4:70:b2:ec:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/custom-flannel-359000/disk.qcow2
	I0507 11:20:46.868513   12443 main.go:141] libmachine: STDOUT: 
	I0507 11:20:46.868533   12443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:46.868549   12443 client.go:171] duration metric: took 277.351417ms to LocalClient.Create
	I0507 11:20:48.870558   12443 start.go:128] duration metric: took 2.340269167s to createHost
	I0507 11:20:48.870575   12443 start.go:83] releasing machines lock for "custom-flannel-359000", held for 2.340652708s
	W0507 11:20:48.870663   12443 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:48.882989   12443 out.go:177] 
	W0507 11:20:48.888963   12443 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:20:48.888983   12443 out.go:239] * 
	* 
	W0507 11:20:48.889549   12443 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:20:48.902970   12443 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.78s)
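
Note: because the qemu command in the log is wrapped by socket_vmnet_client, the connection failure can be reproduced without minikube by wrapping a trivial command instead of qemu (a minimal sketch using the client path shown in the log; echo merely stands in for qemu-system-aarch64):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# Healthy agent: prints "ok". On this agent it should fail with the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused' as above.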

TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.759055s)

-- stdout --
	* [false-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-359000" primary control-plane node in "false-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:20:51.267138   12572 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:20:51.267251   12572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:51.267253   12572 out.go:304] Setting ErrFile to fd 2...
	I0507 11:20:51.267256   12572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:20:51.267384   12572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:20:51.268489   12572 out.go:298] Setting JSON to false
	I0507 11:20:51.284533   12572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6622,"bootTime":1715099429,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:20:51.284600   12572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:20:51.289835   12572 out.go:177] * [false-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:20:51.297537   12572 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:20:51.301564   12572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:20:51.297603   12572 notify.go:220] Checking for updates...
	I0507 11:20:51.307482   12572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:20:51.310575   12572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:20:51.313452   12572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:20:51.316528   12572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:20:51.319837   12572 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:20:51.319905   12572 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:20:51.319958   12572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:20:51.324479   12572 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:20:51.331534   12572 start.go:297] selected driver: qemu2
	I0507 11:20:51.331542   12572 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:20:51.331550   12572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:20:51.333688   12572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:20:51.336470   12572 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:20:51.339527   12572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:20:51.339542   12572 cni.go:84] Creating CNI manager for "false"
	I0507 11:20:51.339571   12572 start.go:340] cluster config:
	{Name:false-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:20:51.343879   12572 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:20:51.351516   12572 out.go:177] * Starting "false-359000" primary control-plane node in "false-359000" cluster
	I0507 11:20:51.355564   12572 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:20:51.355579   12572 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:20:51.355586   12572 cache.go:56] Caching tarball of preloaded images
	I0507 11:20:51.355655   12572 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:20:51.355661   12572 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:20:51.355750   12572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/false-359000/config.json ...
	I0507 11:20:51.355766   12572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/false-359000/config.json: {Name:mkdf572beaa8cef516b0a6b4ba68486cd8660320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:20:51.356193   12572 start.go:360] acquireMachinesLock for false-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:51.356224   12572 start.go:364] duration metric: took 26.334µs to acquireMachinesLock for "false-359000"
	I0507 11:20:51.356234   12572 start.go:93] Provisioning new machine with config: &{Name:false-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:false-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:51.356264   12572 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:51.364495   12572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:51.380728   12572 start.go:159] libmachine.API.Create for "false-359000" (driver="qemu2")
	I0507 11:20:51.380759   12572 client.go:168] LocalClient.Create starting
	I0507 11:20:51.380838   12572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:51.380873   12572 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:51.380912   12572 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:51.380950   12572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:51.380973   12572 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:51.380983   12572 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:51.381407   12572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:51.518101   12572 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:51.560840   12572 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:51.560845   12572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:51.561006   12572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2
	I0507 11:20:51.573730   12572 main.go:141] libmachine: STDOUT: 
	I0507 11:20:51.573751   12572 main.go:141] libmachine: STDERR: 
	I0507 11:20:51.573810   12572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2 +20000M
	I0507 11:20:51.585159   12572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:51.585174   12572 main.go:141] libmachine: STDERR: 
	I0507 11:20:51.585194   12572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2
	I0507 11:20:51.585199   12572 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:51.585232   12572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d5:e4:6f:e7:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2
	I0507 11:20:51.586961   12572 main.go:141] libmachine: STDOUT: 
	I0507 11:20:51.586975   12572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:51.586996   12572 client.go:171] duration metric: took 206.238ms to LocalClient.Create
	I0507 11:20:53.589244   12572 start.go:128] duration metric: took 2.232996417s to createHost
	I0507 11:20:53.589372   12572 start.go:83] releasing machines lock for "false-359000", held for 2.233201292s
	W0507 11:20:53.589432   12572 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:53.603811   12572 out.go:177] * Deleting "false-359000" in qemu2 ...
	W0507 11:20:53.629243   12572 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:20:53.629278   12572 start.go:728] Will try again in 5 seconds ...
	I0507 11:20:58.631297   12572 start.go:360] acquireMachinesLock for false-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:20:58.631775   12572 start.go:364] duration metric: took 412.083µs to acquireMachinesLock for "false-359000"
	I0507 11:20:58.631893   12572 start.go:93] Provisioning new machine with config: &{Name:false-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:false-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:20:58.632059   12572 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:20:58.640709   12572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:20:58.672477   12572 start.go:159] libmachine.API.Create for "false-359000" (driver="qemu2")
	I0507 11:20:58.672528   12572 client.go:168] LocalClient.Create starting
	I0507 11:20:58.672630   12572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:20:58.672682   12572 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:58.672696   12572 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:58.672756   12572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:20:58.672790   12572 main.go:141] libmachine: Decoding PEM data...
	I0507 11:20:58.672799   12572 main.go:141] libmachine: Parsing certificate...
	I0507 11:20:58.673277   12572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:20:58.816738   12572 main.go:141] libmachine: Creating SSH key...
	I0507 11:20:58.932403   12572 main.go:141] libmachine: Creating Disk image...
	I0507 11:20:58.932410   12572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:20:58.932606   12572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2
	I0507 11:20:58.945624   12572 main.go:141] libmachine: STDOUT: 
	I0507 11:20:58.945643   12572 main.go:141] libmachine: STDERR: 
	I0507 11:20:58.945721   12572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2 +20000M
	I0507 11:20:58.956939   12572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:20:58.956953   12572 main.go:141] libmachine: STDERR: 
	I0507 11:20:58.956965   12572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2
	I0507 11:20:58.956970   12572 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:20:58.957003   12572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:f5:64:c3:95:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/false-359000/disk.qcow2
	I0507 11:20:58.958837   12572 main.go:141] libmachine: STDOUT: 
	I0507 11:20:58.958855   12572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:20:58.958867   12572 client.go:171] duration metric: took 286.342833ms to LocalClient.Create
	I0507 11:21:00.960909   12572 start.go:128] duration metric: took 2.328895833s to createHost
	I0507 11:21:00.960960   12572 start.go:83] releasing machines lock for "false-359000", held for 2.329235166s
	W0507 11:21:00.961175   12572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:00.969560   12572 out.go:177] 
	W0507 11:21:00.976564   12572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:21:00.976591   12572 out.go:239] * 
	* 
	W0507 11:21:00.977682   12572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:21:00.989618   12572 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)
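
Note: "Connection refused" on a unix socket usually means the socket file exists but no process is accepting on it (for example a stale file left behind by a crashed daemon), whereas a missing file would instead report "no such file or directory". The two cases can be told apart from a shell (a sketch; it assumes macOS's BSD nc, which accepts -U for unix sockets, with -w bounding the wait):

	ls -l /var/run/socket_vmnet          # present but stale, or absent entirely?
	nc -w 1 -U /var/run/socket_vmnet < /dev/null \
	  && echo "daemon is accepting" \
	  || echo "refused or missing: restart socket_vmnet before re-running the suite"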

TestNetworkPlugins/group/kindnet/Start (9.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.807741833s)

-- stdout --
	* [kindnet-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-359000" primary control-plane node in "kindnet-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:21:03.159243   12685 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:21:03.159364   12685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:03.159366   12685 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:03.159369   12685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:03.159487   12685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:21:03.160556   12685 out.go:298] Setting JSON to false
	I0507 11:21:03.176654   12685 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6634,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:21:03.176723   12685 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:21:03.182733   12685 out.go:177] * [kindnet-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:21:03.190626   12685 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:21:03.195660   12685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:21:03.190683   12685 notify.go:220] Checking for updates...
	I0507 11:21:03.205666   12685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:21:03.208620   12685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:21:03.211585   12685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:21:03.214643   12685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:21:03.218048   12685 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:21:03.218129   12685 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:21:03.218176   12685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:21:03.222593   12685 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:21:03.229628   12685 start.go:297] selected driver: qemu2
	I0507 11:21:03.229635   12685 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:21:03.229641   12685 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:21:03.231893   12685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:21:03.235593   12685 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:21:03.238655   12685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:21:03.238671   12685 cni.go:84] Creating CNI manager for "kindnet"
	I0507 11:21:03.238675   12685 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 11:21:03.238711   12685 start.go:340] cluster config:
	{Name:kindnet-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:21:03.243294   12685 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:21:03.250605   12685 out.go:177] * Starting "kindnet-359000" primary control-plane node in "kindnet-359000" cluster
	I0507 11:21:03.254649   12685 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:21:03.254668   12685 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:21:03.254678   12685 cache.go:56] Caching tarball of preloaded images
	I0507 11:21:03.254747   12685 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:21:03.254752   12685 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:21:03.254825   12685 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kindnet-359000/config.json ...
	I0507 11:21:03.254837   12685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kindnet-359000/config.json: {Name:mkbe3fd22091513f843fc2abf0718afebe079f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:21:03.255274   12685 start.go:360] acquireMachinesLock for kindnet-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:03.255305   12685 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "kindnet-359000"
	I0507 11:21:03.255316   12685 start.go:93] Provisioning new machine with config: &{Name:kindnet-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:03.255376   12685 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:03.263609   12685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:03.279789   12685 start.go:159] libmachine.API.Create for "kindnet-359000" (driver="qemu2")
	I0507 11:21:03.279819   12685 client.go:168] LocalClient.Create starting
	I0507 11:21:03.279891   12685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:03.279919   12685 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:03.279931   12685 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:03.279971   12685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:03.279993   12685 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:03.279999   12685 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:03.280459   12685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:03.417768   12685 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:03.494949   12685 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:03.494954   12685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:03.495113   12685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2
	I0507 11:21:03.507820   12685 main.go:141] libmachine: STDOUT: 
	I0507 11:21:03.507846   12685 main.go:141] libmachine: STDERR: 
	I0507 11:21:03.507903   12685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2 +20000M
	I0507 11:21:03.519502   12685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:03.519521   12685 main.go:141] libmachine: STDERR: 
	I0507 11:21:03.519542   12685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2
	I0507 11:21:03.519547   12685 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:03.519579   12685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:51:7e:1f:a3:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2
	I0507 11:21:03.521327   12685 main.go:141] libmachine: STDOUT: 
	I0507 11:21:03.521341   12685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:03.521358   12685 client.go:171] duration metric: took 241.540833ms to LocalClient.Create
	I0507 11:21:05.523592   12685 start.go:128] duration metric: took 2.268238s to createHost
	I0507 11:21:05.523733   12685 start.go:83] releasing machines lock for "kindnet-359000", held for 2.268482375s
	W0507 11:21:05.523813   12685 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:05.531393   12685 out.go:177] * Deleting "kindnet-359000" in qemu2 ...
	W0507 11:21:05.559548   12685 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:05.559584   12685 start.go:728] Will try again in 5 seconds ...
	I0507 11:21:10.561626   12685 start.go:360] acquireMachinesLock for kindnet-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:10.562100   12685 start.go:364] duration metric: took 404.5µs to acquireMachinesLock for "kindnet-359000"
	I0507 11:21:10.562217   12685 start.go:93] Provisioning new machine with config: &{Name:kindnet-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:10.562468   12685 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:10.567197   12685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:10.614265   12685 start.go:159] libmachine.API.Create for "kindnet-359000" (driver="qemu2")
	I0507 11:21:10.614318   12685 client.go:168] LocalClient.Create starting
	I0507 11:21:10.614442   12685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:10.614506   12685 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:10.614522   12685 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:10.614595   12685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:10.614639   12685 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:10.614677   12685 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:10.615334   12685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:10.763394   12685 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:10.865227   12685 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:10.865233   12685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:10.865393   12685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2
	I0507 11:21:10.878201   12685 main.go:141] libmachine: STDOUT: 
	I0507 11:21:10.878230   12685 main.go:141] libmachine: STDERR: 
	I0507 11:21:10.878300   12685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2 +20000M
	I0507 11:21:10.889801   12685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:10.889827   12685 main.go:141] libmachine: STDERR: 
	I0507 11:21:10.889849   12685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2
	I0507 11:21:10.889854   12685 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:10.889885   12685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:1f:86:fc:71:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kindnet-359000/disk.qcow2
	I0507 11:21:10.891636   12685 main.go:141] libmachine: STDOUT: 
	I0507 11:21:10.891654   12685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:10.891667   12685 client.go:171] duration metric: took 277.351042ms to LocalClient.Create
	I0507 11:21:12.893477   12685 start.go:128] duration metric: took 2.3310455s to createHost
	I0507 11:21:12.893532   12685 start.go:83] releasing machines lock for "kindnet-359000", held for 2.331483791s
	W0507 11:21:12.893666   12685 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:12.907091   12685 out.go:177] 
	W0507 11:21:12.912095   12685 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:21:12.912105   12685 out.go:239] * 
	* 
	W0507 11:21:12.913131   12685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:21:12.928007   12685 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.81s)
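Every start in this group dies at the same step, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation aborts with exit status 80. A minimal sketch for checking the daemon on the build host, assuming a source install under /opt/socket_vmnet as the logged paths suggest (these commands are illustrative, not part of the recorded run):

	# Does the unix socket exist, and is a socket_vmnet process holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet

	# If nothing is listening, start the daemon by hand; vmnet.framework requires root.
	# The gateway address below is the value used in socket_vmnet's own examples.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet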

TestNetworkPlugins/group/flannel/Start (9.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.95874125s)

-- stdout --
	* [flannel-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-359000" primary control-plane node in "flannel-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:21:15.178290   12802 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:21:15.178419   12802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:15.178423   12802 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:15.178425   12802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:15.178556   12802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:21:15.179712   12802 out.go:298] Setting JSON to false
	I0507 11:21:15.195656   12802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6646,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:21:15.195724   12802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:21:15.200976   12802 out.go:177] * [flannel-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:21:15.208646   12802 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:21:15.208686   12802 notify.go:220] Checking for updates...
	I0507 11:21:15.211644   12802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:21:15.215596   12802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:21:15.218640   12802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:21:15.221653   12802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:21:15.224579   12802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:21:15.227895   12802 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:21:15.227966   12802 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:21:15.228020   12802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:21:15.232657   12802 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:21:15.239615   12802 start.go:297] selected driver: qemu2
	I0507 11:21:15.239622   12802 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:21:15.239627   12802 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:21:15.241777   12802 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:21:15.244626   12802 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:21:15.247662   12802 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:21:15.247688   12802 cni.go:84] Creating CNI manager for "flannel"
	I0507 11:21:15.247704   12802 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0507 11:21:15.247742   12802 start.go:340] cluster config:
	{Name:flannel-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:21:15.252554   12802 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:21:15.259622   12802 out.go:177] * Starting "flannel-359000" primary control-plane node in "flannel-359000" cluster
	I0507 11:21:15.263561   12802 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:21:15.263578   12802 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:21:15.263584   12802 cache.go:56] Caching tarball of preloaded images
	I0507 11:21:15.263636   12802 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:21:15.263641   12802 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:21:15.263713   12802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/flannel-359000/config.json ...
	I0507 11:21:15.263724   12802 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/flannel-359000/config.json: {Name:mk70fba18f8f7678ccea1c2419fd8e2167f91522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:21:15.263920   12802 start.go:360] acquireMachinesLock for flannel-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:15.263949   12802 start.go:364] duration metric: took 24.208µs to acquireMachinesLock for "flannel-359000"
	I0507 11:21:15.263961   12802 start.go:93] Provisioning new machine with config: &{Name:flannel-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:15.263984   12802 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:15.267441   12802 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:15.282247   12802 start.go:159] libmachine.API.Create for "flannel-359000" (driver="qemu2")
	I0507 11:21:15.282270   12802 client.go:168] LocalClient.Create starting
	I0507 11:21:15.282330   12802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:15.282360   12802 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:15.282372   12802 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:15.282411   12802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:15.282434   12802 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:15.282444   12802 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:15.282838   12802 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:15.422381   12802 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:15.595569   12802 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:15.595582   12802 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:15.595787   12802 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2
	I0507 11:21:15.608786   12802 main.go:141] libmachine: STDOUT: 
	I0507 11:21:15.608806   12802 main.go:141] libmachine: STDERR: 
	I0507 11:21:15.608856   12802 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2 +20000M
	I0507 11:21:15.619784   12802 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:15.619798   12802 main.go:141] libmachine: STDERR: 
	I0507 11:21:15.619818   12802 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2
	I0507 11:21:15.619822   12802 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:15.619866   12802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:f3:f3:21:b0:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2
	I0507 11:21:15.621617   12802 main.go:141] libmachine: STDOUT: 
	I0507 11:21:15.621631   12802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:15.621653   12802 client.go:171] duration metric: took 339.387791ms to LocalClient.Create
	I0507 11:21:17.623771   12802 start.go:128] duration metric: took 2.359825584s to createHost
	I0507 11:21:17.623845   12802 start.go:83] releasing machines lock for "flannel-359000", held for 2.359955625s
	W0507 11:21:17.623946   12802 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:17.635952   12802 out.go:177] * Deleting "flannel-359000" in qemu2 ...
	W0507 11:21:17.658751   12802 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:17.658780   12802 start.go:728] Will try again in 5 seconds ...
	I0507 11:21:22.660920   12802 start.go:360] acquireMachinesLock for flannel-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:22.661172   12802 start.go:364] duration metric: took 195.041µs to acquireMachinesLock for "flannel-359000"
	I0507 11:21:22.661241   12802 start.go:93] Provisioning new machine with config: &{Name:flannel-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:22.661383   12802 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:22.666781   12802 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:22.701830   12802 start.go:159] libmachine.API.Create for "flannel-359000" (driver="qemu2")
	I0507 11:21:22.701865   12802 client.go:168] LocalClient.Create starting
	I0507 11:21:22.701973   12802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:22.702045   12802 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:22.702060   12802 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:22.702115   12802 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:22.702154   12802 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:22.702164   12802 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:22.702673   12802 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:22.848853   12802 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:23.043653   12802 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:23.043668   12802 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:23.043851   12802 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2
	I0507 11:21:23.057632   12802 main.go:141] libmachine: STDOUT: 
	I0507 11:21:23.057666   12802 main.go:141] libmachine: STDERR: 
	I0507 11:21:23.057761   12802 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2 +20000M
	I0507 11:21:23.069126   12802 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:23.069141   12802 main.go:141] libmachine: STDERR: 
	I0507 11:21:23.069163   12802 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2
	I0507 11:21:23.069172   12802 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:23.069216   12802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:1e:f9:e8:39:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/flannel-359000/disk.qcow2
	I0507 11:21:23.070955   12802 main.go:141] libmachine: STDOUT: 
	I0507 11:21:23.070970   12802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:23.070989   12802 client.go:171] duration metric: took 369.130083ms to LocalClient.Create
	I0507 11:21:25.073125   12802 start.go:128] duration metric: took 2.411779583s to createHost
	I0507 11:21:25.073195   12802 start.go:83] releasing machines lock for "flannel-359000", held for 2.412078958s
	W0507 11:21:25.073555   12802 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:25.083084   12802 out.go:177] 
	W0507 11:21:25.087187   12802 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:21:25.087233   12802 out.go:239] * 
	* 
	W0507 11:21:25.089125   12802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:21:25.097005   12802 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.96s)
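For reference, the disk-image phase that succeeds in each attempt above is two plain qemu-img calls, as the executing: lines show: a raw-to-qcow2 conversion followed by an in-place grow. Reproduced by hand with shortened placeholder paths (the real run uses the full machine directory from the log):

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # wrap the raw seed image as qcow2
	qemu-img resize disk.qcow2 +20000M                           # grow the virtual size by 20000 MB
	qemu-img info disk.qcow2                                     # verify format and virtual size

The failure only occurs afterwards, when the VM's -netdev socket is wired to the unreachable /var/run/socket_vmnet.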

TestNetworkPlugins/group/enable-default-cni/Start (9.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.819077959s)
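The stderr below records "Found deprecated --enable-default-cni flag, setting --cni=bridge", so this invocation is the legacy spelling of a bridge-CNI start; an equivalent modern form of the same command (a sketch, all other flags unchanged) would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2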

-- stdout --
	* [enable-default-cni-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-359000" primary control-plane node in "enable-default-cni-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:21:27.422302   12926 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:21:27.422424   12926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:27.422429   12926 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:27.422431   12926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:27.422575   12926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:21:27.423528   12926 out.go:298] Setting JSON to false
	I0507 11:21:27.439605   12926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6658,"bootTime":1715099429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:21:27.439700   12926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:21:27.446293   12926 out.go:177] * [enable-default-cni-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:21:27.453184   12926 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:21:27.458233   12926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:21:27.453212   12926 notify.go:220] Checking for updates...
	I0507 11:21:27.461117   12926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:21:27.464135   12926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:21:27.467159   12926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:21:27.470110   12926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:21:27.473499   12926 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:21:27.473565   12926 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:21:27.473626   12926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:21:27.478226   12926 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:21:27.485170   12926 start.go:297] selected driver: qemu2
	I0507 11:21:27.485176   12926 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:21:27.485182   12926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:21:27.487489   12926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:21:27.490228   12926 out.go:177] * Automatically selected the socket_vmnet network
	E0507 11:21:27.491432   12926 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0507 11:21:27.491444   12926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:21:27.491457   12926 cni.go:84] Creating CNI manager for "bridge"
	I0507 11:21:27.491462   12926 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:21:27.491488   12926 start.go:340] cluster config:
	{Name:enable-default-cni-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:21:27.495855   12926 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:21:27.503202   12926 out.go:177] * Starting "enable-default-cni-359000" primary control-plane node in "enable-default-cni-359000" cluster
	I0507 11:21:27.507198   12926 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:21:27.507213   12926 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:21:27.507221   12926 cache.go:56] Caching tarball of preloaded images
	I0507 11:21:27.507290   12926 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:21:27.507296   12926 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:21:27.507370   12926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/enable-default-cni-359000/config.json ...
	I0507 11:21:27.507385   12926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/enable-default-cni-359000/config.json: {Name:mk0494d3b98d3cc2dc6ef558cce6c7391a7cbc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:21:27.507707   12926 start.go:360] acquireMachinesLock for enable-default-cni-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:27.507742   12926 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "enable-default-cni-359000"
	I0507 11:21:27.507754   12926 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:27.507779   12926 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:27.516157   12926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:27.531689   12926 start.go:159] libmachine.API.Create for "enable-default-cni-359000" (driver="qemu2")
	I0507 11:21:27.531715   12926 client.go:168] LocalClient.Create starting
	I0507 11:21:27.531778   12926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:27.531807   12926 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:27.531815   12926 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:27.531855   12926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:27.531877   12926 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:27.531882   12926 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:27.532288   12926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:27.669852   12926 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:27.737774   12926 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:27.737779   12926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:27.737965   12926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2
	I0507 11:21:27.750559   12926 main.go:141] libmachine: STDOUT: 
	I0507 11:21:27.750590   12926 main.go:141] libmachine: STDERR: 
	I0507 11:21:27.750657   12926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2 +20000M
	I0507 11:21:27.761640   12926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:27.761657   12926 main.go:141] libmachine: STDERR: 
	I0507 11:21:27.761681   12926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2
	I0507 11:21:27.761687   12926 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:27.761718   12926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:2f:76:c5:1f:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2
	I0507 11:21:27.763454   12926 main.go:141] libmachine: STDOUT: 
	I0507 11:21:27.763468   12926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:27.763491   12926 client.go:171] duration metric: took 231.778792ms to LocalClient.Create
	I0507 11:21:29.765631   12926 start.go:128] duration metric: took 2.257891917s to createHost
	I0507 11:21:29.765701   12926 start.go:83] releasing machines lock for "enable-default-cni-359000", held for 2.258013958s
	W0507 11:21:29.765795   12926 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:29.779817   12926 out.go:177] * Deleting "enable-default-cni-359000" in qemu2 ...
	W0507 11:21:29.805819   12926 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:29.805854   12926 start.go:728] Will try again in 5 seconds ...
	I0507 11:21:34.806433   12926 start.go:360] acquireMachinesLock for enable-default-cni-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:34.806539   12926 start.go:364] duration metric: took 85.166µs to acquireMachinesLock for "enable-default-cni-359000"
	I0507 11:21:34.806553   12926 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:34.806631   12926 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:34.812843   12926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:34.828743   12926 start.go:159] libmachine.API.Create for "enable-default-cni-359000" (driver="qemu2")
	I0507 11:21:34.828774   12926 client.go:168] LocalClient.Create starting
	I0507 11:21:34.828849   12926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:34.828885   12926 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:34.828895   12926 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:34.828938   12926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:34.828961   12926 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:34.828978   12926 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:34.829391   12926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:34.967159   12926 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:35.141609   12926 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:35.141617   12926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:35.141821   12926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2
	I0507 11:21:35.154455   12926 main.go:141] libmachine: STDOUT: 
	I0507 11:21:35.154485   12926 main.go:141] libmachine: STDERR: 
	I0507 11:21:35.154541   12926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2 +20000M
	I0507 11:21:35.165666   12926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:35.165687   12926 main.go:141] libmachine: STDERR: 
	I0507 11:21:35.165699   12926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2
	I0507 11:21:35.165715   12926 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:35.165748   12926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b5:d1:9e:62:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/enable-default-cni-359000/disk.qcow2
	I0507 11:21:35.167530   12926 main.go:141] libmachine: STDOUT: 
	I0507 11:21:35.167549   12926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:35.167562   12926 client.go:171] duration metric: took 338.790917ms to LocalClient.Create
	I0507 11:21:37.169724   12926 start.go:128] duration metric: took 2.363126709s to createHost
	I0507 11:21:37.169808   12926 start.go:83] releasing machines lock for "enable-default-cni-359000", held for 2.363326292s
	W0507 11:21:37.170304   12926 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:37.179854   12926 out.go:177] 
	W0507 11:21:37.184106   12926 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:21:37.184145   12926 out.go:239] * 
	* 
	W0507 11:21:37.186754   12926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:21:37.197071   12926 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.82s)
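
Every start failure in this group has the same shape: qemu-img creates and resizes the disk image cleanly, but the qemu-system-aarch64 launch is wrapped in socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so the VM never starts. A minimal Go probe of that socket makes the failed precondition explicit; this is a sketch, not part of the test suite, and only the socket path is taken from the command lines in the log above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path copied from the qemu2 driver's command line in the log above.
		const sock = "/var/run/socket_vmnet"

		// socket_vmnet_client needs this unix socket; a dial error of
		// "connection refused" matches the STDERR lines in the failing runs
		// and means the socket_vmnet daemon is not running on the host.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}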

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.82213075s)

-- stdout --
	* [bridge-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-359000" primary control-plane node in "bridge-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:21:39.354813   13037 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:21:39.354947   13037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:39.354952   13037 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:39.354954   13037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:39.355081   13037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:21:39.356217   13037 out.go:298] Setting JSON to false
	I0507 11:21:39.372533   13037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6670,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:21:39.372598   13037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:21:39.377370   13037 out.go:177] * [bridge-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:21:39.384182   13037 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:21:39.384250   13037 notify.go:220] Checking for updates...
	I0507 11:21:39.391320   13037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:21:39.394291   13037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:21:39.397249   13037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:21:39.400311   13037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:21:39.401741   13037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:21:39.405663   13037 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:21:39.405732   13037 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:21:39.405782   13037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:21:39.410336   13037 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:21:39.415283   13037 start.go:297] selected driver: qemu2
	I0507 11:21:39.415290   13037 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:21:39.415297   13037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:21:39.417554   13037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:21:39.420285   13037 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:21:39.423334   13037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:21:39.423354   13037 cni.go:84] Creating CNI manager for "bridge"
	I0507 11:21:39.423358   13037 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:21:39.423395   13037 start.go:340] cluster config:
	{Name:bridge-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:21:39.427697   13037 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:21:39.435272   13037 out.go:177] * Starting "bridge-359000" primary control-plane node in "bridge-359000" cluster
	I0507 11:21:39.439253   13037 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:21:39.439266   13037 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:21:39.439274   13037 cache.go:56] Caching tarball of preloaded images
	I0507 11:21:39.439339   13037 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:21:39.439345   13037 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:21:39.439405   13037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/bridge-359000/config.json ...
	I0507 11:21:39.439416   13037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/bridge-359000/config.json: {Name:mk1735f0d752c115d599499340379c9d8635adf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:21:39.439708   13037 start.go:360] acquireMachinesLock for bridge-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:39.439740   13037 start.go:364] duration metric: took 26.083µs to acquireMachinesLock for "bridge-359000"
	I0507 11:21:39.439751   13037 start.go:93] Provisioning new machine with config: &{Name:bridge-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:39.439779   13037 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:39.448280   13037 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:39.463727   13037 start.go:159] libmachine.API.Create for "bridge-359000" (driver="qemu2")
	I0507 11:21:39.463755   13037 client.go:168] LocalClient.Create starting
	I0507 11:21:39.463816   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:39.463849   13037 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:39.463861   13037 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:39.463904   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:39.463932   13037 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:39.463938   13037 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:39.464276   13037 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:39.603541   13037 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:39.722302   13037 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:39.722308   13037 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:39.722595   13037 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2
	I0507 11:21:39.735123   13037 main.go:141] libmachine: STDOUT: 
	I0507 11:21:39.735146   13037 main.go:141] libmachine: STDERR: 
	I0507 11:21:39.735207   13037 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2 +20000M
	I0507 11:21:39.746280   13037 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:39.746298   13037 main.go:141] libmachine: STDERR: 
	I0507 11:21:39.746315   13037 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2
	I0507 11:21:39.746331   13037 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:39.746372   13037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:40:e3:07:0c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2
	I0507 11:21:39.748122   13037 main.go:141] libmachine: STDOUT: 
	I0507 11:21:39.748140   13037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:39.748172   13037 client.go:171] duration metric: took 284.411083ms to LocalClient.Create
	I0507 11:21:41.750357   13037 start.go:128] duration metric: took 2.310615875s to createHost
	I0507 11:21:41.750418   13037 start.go:83] releasing machines lock for "bridge-359000", held for 2.310734709s
	W0507 11:21:41.750484   13037 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:41.763724   13037 out.go:177] * Deleting "bridge-359000" in qemu2 ...
	W0507 11:21:41.795128   13037 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:41.795237   13037 start.go:728] Will try again in 5 seconds ...
	I0507 11:21:46.797312   13037 start.go:360] acquireMachinesLock for bridge-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:46.797880   13037 start.go:364] duration metric: took 417.417µs to acquireMachinesLock for "bridge-359000"
	I0507 11:21:46.797956   13037 start.go:93] Provisioning new machine with config: &{Name:bridge-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:46.798231   13037 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:46.806851   13037 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:46.853699   13037 start.go:159] libmachine.API.Create for "bridge-359000" (driver="qemu2")
	I0507 11:21:46.853759   13037 client.go:168] LocalClient.Create starting
	I0507 11:21:46.853885   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:46.853954   13037 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:46.853971   13037 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:46.854028   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:46.854071   13037 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:46.854083   13037 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:46.854648   13037 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:47.003844   13037 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:47.078142   13037 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:47.078147   13037 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:47.078319   13037 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2
	I0507 11:21:47.091076   13037 main.go:141] libmachine: STDOUT: 
	I0507 11:21:47.091100   13037 main.go:141] libmachine: STDERR: 
	I0507 11:21:47.091150   13037 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2 +20000M
	I0507 11:21:47.102160   13037 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:47.102175   13037 main.go:141] libmachine: STDERR: 
	I0507 11:21:47.102190   13037 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2
	I0507 11:21:47.102195   13037 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:47.102231   13037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:09:10:6a:18:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/bridge-359000/disk.qcow2
	I0507 11:21:47.103945   13037 main.go:141] libmachine: STDOUT: 
	I0507 11:21:47.103963   13037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:47.103977   13037 client.go:171] duration metric: took 250.218292ms to LocalClient.Create
	I0507 11:21:49.106157   13037 start.go:128] duration metric: took 2.307949791s to createHost
	I0507 11:21:49.106240   13037 start.go:83] releasing machines lock for "bridge-359000", held for 2.308402125s
	W0507 11:21:49.106758   13037 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:49.116405   13037 out.go:177] 
	W0507 11:21:49.123495   13037 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:21:49.123519   13037 out.go:239] * 
	* 
	W0507 11:21:49.126210   13037 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:21:49.138399   13037 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
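
The bridge profile fails identically, including the recovery path: StartHost gets the socket_vmnet connection refused, the half-created "bridge-359000" machine is deleted, minikube waits five seconds ("Will try again in 5 seconds ...") and retries once, and the second refusal becomes the fatal GUEST_PROVISION exit (status 80). A rough sketch of that single-retry shape, with hypothetical names standing in for the internals of minikube's start.go:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost is a hypothetical stand-in for minikube's host-creation step;
	// it fails the same way every run in this report does, so the retry path runs.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "bridge-359000"
		err := createHost(profile)
		if err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			err = createHost(profile)
		}
		if err != nil {
			// The second failure is fatal: "X Exiting due to GUEST_PROVISION".
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}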

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-359000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.836834084s)

-- stdout --
	* [kubenet-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-359000" primary control-plane node in "kubenet-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:21:51.372594   13154 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:21:51.372727   13154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:51.372730   13154 out.go:304] Setting ErrFile to fd 2...
	I0507 11:21:51.372732   13154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:21:51.372873   13154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:21:51.373916   13154 out.go:298] Setting JSON to false
	I0507 11:21:51.389921   13154 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6682,"bootTime":1715099429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:21:51.389999   13154 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:21:51.396591   13154 out.go:177] * [kubenet-359000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:21:51.404559   13154 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:21:51.409571   13154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:21:51.404616   13154 notify.go:220] Checking for updates...
	I0507 11:21:51.415573   13154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:21:51.418578   13154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:21:51.421548   13154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:21:51.424526   13154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:21:51.427859   13154 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:21:51.427927   13154 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:21:51.427983   13154 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:21:51.432562   13154 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:21:51.439527   13154 start.go:297] selected driver: qemu2
	I0507 11:21:51.439540   13154 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:21:51.439547   13154 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:21:51.441792   13154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:21:51.445540   13154 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:21:51.448697   13154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:21:51.448723   13154 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0507 11:21:51.448752   13154 start.go:340] cluster config:
	{Name:kubenet-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:21:51.453393   13154 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:21:51.460535   13154 out.go:177] * Starting "kubenet-359000" primary control-plane node in "kubenet-359000" cluster
	I0507 11:21:51.464530   13154 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:21:51.464547   13154 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:21:51.464557   13154 cache.go:56] Caching tarball of preloaded images
	I0507 11:21:51.464633   13154 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:21:51.464639   13154 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:21:51.464705   13154 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kubenet-359000/config.json ...
	I0507 11:21:51.464718   13154 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/kubenet-359000/config.json: {Name:mk5bbd19e2d6394b9004f8ba34dce0bb63d9fdce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:21:51.464940   13154 start.go:360] acquireMachinesLock for kubenet-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:51.464974   13154 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "kubenet-359000"
	I0507 11:21:51.464986   13154 start.go:93] Provisioning new machine with config: &{Name:kubenet-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:51.465011   13154 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:51.472475   13154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:51.489992   13154 start.go:159] libmachine.API.Create for "kubenet-359000" (driver="qemu2")
	I0507 11:21:51.490024   13154 client.go:168] LocalClient.Create starting
	I0507 11:21:51.490095   13154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:51.490126   13154 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:51.490136   13154 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:51.490186   13154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:51.490212   13154 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:51.490220   13154 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:51.490609   13154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:51.631669   13154 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:51.734414   13154 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:51.734420   13154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:51.734579   13154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2
	I0507 11:21:51.747212   13154 main.go:141] libmachine: STDOUT: 
	I0507 11:21:51.747237   13154 main.go:141] libmachine: STDERR: 
	I0507 11:21:51.747294   13154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2 +20000M
	I0507 11:21:51.758387   13154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:51.758404   13154 main.go:141] libmachine: STDERR: 
	I0507 11:21:51.758416   13154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2
	I0507 11:21:51.758421   13154 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:51.758461   13154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:58:0a:29:b6:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2
	I0507 11:21:51.760241   13154 main.go:141] libmachine: STDOUT: 
	I0507 11:21:51.760260   13154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:51.760286   13154 client.go:171] duration metric: took 270.263333ms to LocalClient.Create
	I0507 11:21:53.762578   13154 start.go:128] duration metric: took 2.297478583s to createHost
	I0507 11:21:53.762654   13154 start.go:83] releasing machines lock for "kubenet-359000", held for 2.297736292s
	W0507 11:21:53.762781   13154 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:53.773885   13154 out.go:177] * Deleting "kubenet-359000" in qemu2 ...
	W0507 11:21:53.800483   13154 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:21:53.800514   13154 start.go:728] Will try again in 5 seconds ...
	I0507 11:21:58.800894   13154 start.go:360] acquireMachinesLock for kubenet-359000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:21:58.801319   13154 start.go:364] duration metric: took 333.75µs to acquireMachinesLock for "kubenet-359000"
	I0507 11:21:58.801430   13154 start.go:93] Provisioning new machine with config: &{Name:kubenet-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:21:58.801628   13154 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:21:58.806256   13154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0507 11:21:58.848470   13154 start.go:159] libmachine.API.Create for "kubenet-359000" (driver="qemu2")
	I0507 11:21:58.848610   13154 client.go:168] LocalClient.Create starting
	I0507 11:21:58.848730   13154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:21:58.848807   13154 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:58.848826   13154 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:58.848888   13154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:21:58.848932   13154 main.go:141] libmachine: Decoding PEM data...
	I0507 11:21:58.848947   13154 main.go:141] libmachine: Parsing certificate...
	I0507 11:21:58.849913   13154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:21:58.994770   13154 main.go:141] libmachine: Creating SSH key...
	I0507 11:21:59.121204   13154 main.go:141] libmachine: Creating Disk image...
	I0507 11:21:59.121212   13154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:21:59.121383   13154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2
	I0507 11:21:59.134156   13154 main.go:141] libmachine: STDOUT: 
	I0507 11:21:59.134180   13154 main.go:141] libmachine: STDERR: 
	I0507 11:21:59.134252   13154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2 +20000M
	I0507 11:21:59.145205   13154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:21:59.145223   13154 main.go:141] libmachine: STDERR: 
	I0507 11:21:59.145247   13154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2
	I0507 11:21:59.145252   13154 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:21:59.145285   13154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:4c:0b:21:49:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/kubenet-359000/disk.qcow2
	I0507 11:21:59.146975   13154 main.go:141] libmachine: STDOUT: 
	I0507 11:21:59.146990   13154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:21:59.147010   13154 client.go:171] duration metric: took 298.395458ms to LocalClient.Create
	I0507 11:22:01.149049   13154 start.go:128] duration metric: took 2.347464458s to createHost
	I0507 11:22:01.149077   13154 start.go:83] releasing machines lock for "kubenet-359000", held for 2.347811834s
	W0507 11:22:01.149239   13154 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:01.157523   13154 out.go:177] 
	W0507 11:22:01.164524   13154 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:01.164530   13154 out.go:239] * 
	* 
	W0507 11:22:01.165040   13154 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:01.174526   13154 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)
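Every failure above reduces to the same STDERR line from socket_vmnet_client: the dial to /var/run/socket_vmnet is refused before QEMU is even handed its network fd (-netdev socket,id=net0,fd=3). A minimal Go sketch of the same reachability probe, assuming only the socket path shown in the log (the probe is illustrative, not part of minikube or the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the unix socket that socket_vmnet_client connects to before
// launching qemu-system-aarch64. A dial error here ("connection refused")
// matches the failure mode seen throughout this report: nothing is
// listening at /var/run/socket_vmnet.
func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}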

TestStartStop/group/old-k8s-version/serial/FirstStart (10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-301000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-301000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.938116417s)

-- stdout --
	* [old-k8s-version-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-301000" primary control-plane node in "old-k8s-version-301000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:03.349661   13268 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:03.349798   13268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:03.349802   13268 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:03.349804   13268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:03.349919   13268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:03.351085   13268 out.go:298] Setting JSON to false
	I0507 11:22:03.367621   13268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6694,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:03.367682   13268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:03.374505   13268 out.go:177] * [old-k8s-version-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:03.382326   13268 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:03.386354   13268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:03.382393   13268 notify.go:220] Checking for updates...
	I0507 11:22:03.392233   13268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:03.395262   13268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:03.398331   13268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:03.401303   13268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:03.404726   13268 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:03.404799   13268 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:22:03.404851   13268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:03.409234   13268 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:22:03.416224   13268 start.go:297] selected driver: qemu2
	I0507 11:22:03.416230   13268 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:22:03.416236   13268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:03.418442   13268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:22:03.422378   13268 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:22:03.426285   13268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:03.426304   13268 cni.go:84] Creating CNI manager for ""
	I0507 11:22:03.426310   13268 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0507 11:22:03.426339   13268 start.go:340] cluster config:
	{Name:old-k8s-version-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:03.430719   13268 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:03.438179   13268 out.go:177] * Starting "old-k8s-version-301000" primary control-plane node in "old-k8s-version-301000" cluster
	I0507 11:22:03.442303   13268 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 11:22:03.442317   13268 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:03.442322   13268 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:03.442372   13268 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:03.442377   13268 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0507 11:22:03.442425   13268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/old-k8s-version-301000/config.json ...
	I0507 11:22:03.442435   13268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/old-k8s-version-301000/config.json: {Name:mke22ed1c3c3dc67c754f511c8bc2a2ac0c670b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:22:03.442729   13268 start.go:360] acquireMachinesLock for old-k8s-version-301000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:03.442769   13268 start.go:364] duration metric: took 31.416µs to acquireMachinesLock for "old-k8s-version-301000"
	I0507 11:22:03.442781   13268 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:03.442809   13268 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:03.450305   13268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:03.465175   13268 start.go:159] libmachine.API.Create for "old-k8s-version-301000" (driver="qemu2")
	I0507 11:22:03.465204   13268 client.go:168] LocalClient.Create starting
	I0507 11:22:03.465262   13268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:03.465294   13268 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:03.465304   13268 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:03.465349   13268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:03.465370   13268 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:03.465376   13268 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:03.465715   13268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:03.604172   13268 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:03.785092   13268 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:03.785101   13268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:03.785268   13268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:03.797766   13268 main.go:141] libmachine: STDOUT: 
	I0507 11:22:03.797792   13268 main.go:141] libmachine: STDERR: 
	I0507 11:22:03.797846   13268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2 +20000M
	I0507 11:22:03.809111   13268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:03.809129   13268 main.go:141] libmachine: STDERR: 
	I0507 11:22:03.809151   13268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:03.809156   13268 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:03.809195   13268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a8:42:78:f5:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:03.810992   13268 main.go:141] libmachine: STDOUT: 
	I0507 11:22:03.811010   13268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:03.811035   13268 client.go:171] duration metric: took 345.836625ms to LocalClient.Create
	I0507 11:22:05.813037   13268 start.go:128] duration metric: took 2.370291s to createHost
	I0507 11:22:05.813049   13268 start.go:83] releasing machines lock for "old-k8s-version-301000", held for 2.370343709s
	W0507 11:22:05.813066   13268 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:05.824007   13268 out.go:177] * Deleting "old-k8s-version-301000" in qemu2 ...
	W0507 11:22:05.841901   13268 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:05.841910   13268 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:10.843907   13268 start.go:360] acquireMachinesLock for old-k8s-version-301000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:10.844144   13268 start.go:364] duration metric: took 198.917µs to acquireMachinesLock for "old-k8s-version-301000"
	I0507 11:22:10.844191   13268 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:10.844267   13268 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:10.849565   13268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:10.871181   13268 start.go:159] libmachine.API.Create for "old-k8s-version-301000" (driver="qemu2")
	I0507 11:22:10.871212   13268 client.go:168] LocalClient.Create starting
	I0507 11:22:10.871292   13268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:10.871330   13268 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:10.871343   13268 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:10.871381   13268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:10.871410   13268 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:10.871416   13268 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:10.871809   13268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:11.009333   13268 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:11.194651   13268 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:11.194661   13268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:11.194823   13268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:11.207359   13268 main.go:141] libmachine: STDOUT: 
	I0507 11:22:11.207384   13268 main.go:141] libmachine: STDERR: 
	I0507 11:22:11.207431   13268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2 +20000M
	I0507 11:22:11.218512   13268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:11.218535   13268 main.go:141] libmachine: STDERR: 
	I0507 11:22:11.218558   13268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:11.218564   13268 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:11.218601   13268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:63:d0:c4:b4:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:11.220347   13268 main.go:141] libmachine: STDOUT: 
	I0507 11:22:11.220363   13268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:11.220378   13268 client.go:171] duration metric: took 349.173ms to LocalClient.Create
	I0507 11:22:13.222583   13268 start.go:128] duration metric: took 2.378337375s to createHost
	I0507 11:22:13.222665   13268 start.go:83] releasing machines lock for "old-k8s-version-301000", held for 2.378577333s
	W0507 11:22:13.223093   13268 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:13.231736   13268 out.go:177] 
	W0507 11:22:13.235774   13268 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:13.235843   13268 out.go:239] * 
	* 
	W0507 11:22:13.238506   13268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:13.247715   13268 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-301000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (62.614958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.00s)
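The FirstStart log shows the retry shape around createHost: a failed create deletes the half-built profile, waits a fixed 5 seconds, tries once more, then exits with GUEST_PROVISION (exit status 80). A compressed Go sketch of that control flow; startHost and deleteHost are hypothetical stand-ins for minikube's internals, with the failure hard-coded to the socket error recorded above:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost always fails here, mirroring the socket_vmnet refusal in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteHost stands in for the "* Deleting ... in qemu2 ..." cleanup step.
func deleteHost(name string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", name)
}

func main() {
	if err := startHost(); err != nil {
		deleteHost("old-k8s-version-301000")
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			// the CLI surfaces this as exit status 80
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}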

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-301000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-301000 create -f testdata/busybox.yaml: exit status 1 (30.369042ms)

** stderr ** 
	error: context "old-k8s-version-301000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-301000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (28.295625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (28.391ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
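DeployApp (and the addon tests that follow) never reach their own logic: FirstStart failed, so the kubeconfig context old-k8s-version-301000 was never written and every kubectl invocation aborts immediately with the error shown. A small illustrative pre-check, not part of helpers_test.go, that a context exists before invoking kubectl against it:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// contextExists shells out to kubectl; `config get-contexts -o name`
// prints one context name per line.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	if !contextExists("old-k8s-version-301000") {
		fmt.Fprintln(os.Stderr, `context "old-k8s-version-301000" does not exist`)
		os.Exit(1)
	}
	fmt.Println("context found; safe to run kubectl --context old-k8s-version-301000 ...")
}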

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-301000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-301000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-301000 describe deploy/metrics-server -n kube-system: exit status 1 (26.767583ms)

** stderr ** 
	error: context "old-k8s-version-301000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-301000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (28.889667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-301000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-301000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.194889625s)

-- stdout --
	* [old-k8s-version-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-301000" primary control-plane node in "old-k8s-version-301000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:16.760035   13324 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:16.760184   13324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:16.760187   13324 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:16.760190   13324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:16.760322   13324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:16.761369   13324 out.go:298] Setting JSON to false
	I0507 11:22:16.777346   13324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6707,"bootTime":1715099429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:16.777425   13324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:16.781163   13324 out.go:177] * [old-k8s-version-301000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:16.792137   13324 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:16.787200   13324 notify.go:220] Checking for updates...
	I0507 11:22:16.800006   13324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:16.803142   13324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:16.806137   13324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:16.809156   13324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:16.812150   13324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:16.815486   13324 config.go:182] Loaded profile config "old-k8s-version-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0507 11:22:16.819021   13324 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0507 11:22:16.822090   13324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:16.826165   13324 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:22:16.833134   13324 start.go:297] selected driver: qemu2
	I0507 11:22:16.833140   13324 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:16.833199   13324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:16.835594   13324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:16.835618   13324 cni.go:84] Creating CNI manager for ""
	I0507 11:22:16.835624   13324 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0507 11:22:16.835650   13324 start.go:340] cluster config:
	{Name:old-k8s-version-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:16.840015   13324 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:16.848034   13324 out.go:177] * Starting "old-k8s-version-301000" primary control-plane node in "old-k8s-version-301000" cluster
	I0507 11:22:16.852118   13324 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 11:22:16.852132   13324 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:16.852137   13324 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:16.852189   13324 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:16.852193   13324 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0507 11:22:16.852259   13324 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/old-k8s-version-301000/config.json ...
	I0507 11:22:16.852775   13324 start.go:360] acquireMachinesLock for old-k8s-version-301000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:16.852803   13324 start.go:364] duration metric: took 21.167µs to acquireMachinesLock for "old-k8s-version-301000"
	I0507 11:22:16.852811   13324 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:16.852815   13324 fix.go:54] fixHost starting: 
	I0507 11:22:16.852921   13324 fix.go:112] recreateIfNeeded on old-k8s-version-301000: state=Stopped err=<nil>
	W0507 11:22:16.852928   13324 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:16.857129   13324 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-301000" ...
	I0507 11:22:16.864047   13324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:63:d0:c4:b4:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:16.866059   13324 main.go:141] libmachine: STDOUT: 
	I0507 11:22:16.866077   13324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:16.866104   13324 fix.go:56] duration metric: took 13.288ms for fixHost
	I0507 11:22:16.866108   13324 start.go:83] releasing machines lock for "old-k8s-version-301000", held for 13.301833ms
	W0507 11:22:16.866115   13324 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:16.866157   13324 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:16.866161   13324 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:21.868118   13324 start.go:360] acquireMachinesLock for old-k8s-version-301000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:21.868196   13324 start.go:364] duration metric: took 51.917µs to acquireMachinesLock for "old-k8s-version-301000"
	I0507 11:22:21.868220   13324 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:21.868224   13324 fix.go:54] fixHost starting: 
	I0507 11:22:21.868394   13324 fix.go:112] recreateIfNeeded on old-k8s-version-301000: state=Stopped err=<nil>
	W0507 11:22:21.868402   13324 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:21.883395   13324 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-301000" ...
	I0507 11:22:21.887608   13324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:63:d0:c4:b4:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/old-k8s-version-301000/disk.qcow2
	I0507 11:22:21.890701   13324 main.go:141] libmachine: STDOUT: 
	I0507 11:22:21.890727   13324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:21.890752   13324 fix.go:56] duration metric: took 22.528542ms for fixHost
	I0507 11:22:21.890757   13324 start.go:83] releasing machines lock for "old-k8s-version-301000", held for 22.553625ms
	W0507 11:22:21.890832   13324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:21.903566   13324 out.go:177] 
	W0507 11:22:21.907587   13324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:21.907596   13324 out.go:239] * 
	* 
	W0507 11:22:21.908371   13324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:21.918522   13324 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-301000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (36.7155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
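Every failure in this group traces back to the same root cause recorded above: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever starts. A minimal diagnostic sketch, assuming the default install paths shown in this log (the --vmnet-gateway address below is socket_vmnet's documented default, not something taken from this run):

    # Is the socket_vmnet daemon running, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If the daemon is down, it can be started by hand (root required):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections, the socket_vmnet_client invocation logged above should succeed.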

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-301000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (32.370417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-301000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-301000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-301000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.856167ms)

** stderr ** 
	error: context "old-k8s-version-301000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-301000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (39.005542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.07s)
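The kubectl errors here ("context ... does not exist") follow directly from the start failure: because the VM never provisioned, minikube never wrote a context for this profile into the kubeconfig. A quick way to confirm, as a sketch against the kubeconfig path this run uses:

    # List the contexts the tests can actually see; a profile whose start
    # failed will simply be missing from this table.
    KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig \
      kubectl config get-contexts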

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-301000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (28.917459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
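The diff above uses the go-cmp (-want +got) convention: each line prefixed with "-" is an expected v1.20.0 image absent from the actual list, and the "got" side is empty because image list returns nothing when the host is stopped. The check can be reproduced by hand, assuming the same profile name:

    # With the VM stopped, both of these print an empty image list,
    # which is why all eight expected images appear on the -want side.
    out/minikube-darwin-arm64 -p old-k8s-version-301000 image list --format=json
    out/minikube-darwin-arm64 -p old-k8s-version-301000 image list --format=table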

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-301000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-301000 --alsologtostderr -v=1: exit status 83 (47.975208ms)

-- stdout --
	* The control-plane node old-k8s-version-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-301000"

-- /stdout --
** stderr ** 
	I0507 11:22:22.173875   13345 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:22.174205   13345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:22.174212   13345 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:22.174215   13345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:22.174345   13345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:22.174541   13345 out.go:298] Setting JSON to false
	I0507 11:22:22.174551   13345 mustload.go:65] Loading cluster: old-k8s-version-301000
	I0507 11:22:22.174732   13345 config.go:182] Loaded profile config "old-k8s-version-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0507 11:22:22.177442   13345 out.go:177] * The control-plane node old-k8s-version-301000 host is not running: state=Stopped
	I0507 11:22:22.188576   13345 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-301000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-301000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (38.657458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (30.578292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (9.73s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.669135334s)

-- stdout --
	* [no-preload-504000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-504000" primary control-plane node in "no-preload-504000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-504000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:22.721600   13370 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:22.721738   13370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:22.721741   13370 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:22.721744   13370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:22.721868   13370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:22.722944   13370 out.go:298] Setting JSON to false
	I0507 11:22:22.739487   13370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6713,"bootTime":1715099429,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:22.739584   13370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:22.742572   13370 out.go:177] * [no-preload-504000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:22.751601   13370 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:22.748689   13370 notify.go:220] Checking for updates...
	I0507 11:22:22.759527   13370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:22.762584   13370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:22.764187   13370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:22.767584   13370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:22.770539   13370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:22.773949   13370 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:22.774010   13370 config.go:182] Loaded profile config "stopped-upgrade-069000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0507 11:22:22.774055   13370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:22.778523   13370 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:22:22.785543   13370 start.go:297] selected driver: qemu2
	I0507 11:22:22.785552   13370 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:22:22.785560   13370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:22.787823   13370 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:22:22.790597   13370 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:22:22.793619   13370 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:22.793637   13370 cni.go:84] Creating CNI manager for ""
	I0507 11:22:22.793644   13370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:22.793648   13370 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:22:22.793685   13370 start.go:340] cluster config:
	{Name:no-preload-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:22.798154   13370 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.805534   13370 out.go:177] * Starting "no-preload-504000" primary control-plane node in "no-preload-504000" cluster
	I0507 11:22:22.809485   13370 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:22.809539   13370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/no-preload-504000/config.json ...
	I0507 11:22:22.809552   13370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/no-preload-504000/config.json: {Name:mk5b9eae091e119be8c877c3faaba1856cb9e7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:22:22.809571   13370 cache.go:107] acquiring lock: {Name:mk93cab9782caf818e2fce3a23d39a17d84a3524 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809585   13370 cache.go:107] acquiring lock: {Name:mk163457a604919bf95e976f8529faa50a84c24a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809626   13370 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0507 11:22:22.809631   13370 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.417µs
	I0507 11:22:22.809636   13370 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0507 11:22:22.809643   13370 cache.go:107] acquiring lock: {Name:mk05322f6dd069e3c26940b41714b24f167672f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809716   13370 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0507 11:22:22.809746   13370 cache.go:107] acquiring lock: {Name:mk6bd8f9bfb9f6d4425c46d111dadc9546279f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809766   13370 cache.go:107] acquiring lock: {Name:mk66c87c134423c1b4084b80a18f3a33dda1be5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809775   13370 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0507 11:22:22.809789   13370 start.go:360] acquireMachinesLock for no-preload-504000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:22.809819   13370 start.go:364] duration metric: took 25.084µs to acquireMachinesLock for "no-preload-504000"
	I0507 11:22:22.809572   13370 cache.go:107] acquiring lock: {Name:mk2bef865c1999a4aea1e2d338942911ab96b7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809829   13370 start.go:93] Provisioning new machine with config: &{Name:no-preload-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:22.809873   13370 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:22.809903   13370 cache.go:107] acquiring lock: {Name:mk21e1e6cf1ce9ad369bc03877e354f049ad99c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809927   13370 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0507 11:22:22.809977   13370 cache.go:107] acquiring lock: {Name:mkdcbc54d8c80b71ddd09466b7040ce533d0306f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:22.809989   13370 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0507 11:22:22.810045   13370 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 11:22:22.810055   13370 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0507 11:22:22.810056   13370 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0507 11:22:22.818555   13370 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:22.821679   13370 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0507 11:22:22.822230   13370 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0507 11:22:22.825436   13370 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0507 11:22:22.826249   13370 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0507 11:22:22.826276   13370 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0507 11:22:22.826361   13370 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 11:22:22.826462   13370 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0507 11:22:22.833903   13370 start.go:159] libmachine.API.Create for "no-preload-504000" (driver="qemu2")
	I0507 11:22:22.833925   13370 client.go:168] LocalClient.Create starting
	I0507 11:22:22.833986   13370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:22.834016   13370 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:22.834026   13370 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:22.834069   13370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:22.834091   13370 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:22.834103   13370 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:22.834431   13370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:22.975159   13370 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:23.029670   13370 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:23.029689   13370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:23.029870   13370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:23.042404   13370 main.go:141] libmachine: STDOUT: 
	I0507 11:22:23.042463   13370 main.go:141] libmachine: STDERR: 
	I0507 11:22:23.042550   13370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2 +20000M
	I0507 11:22:23.055573   13370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:23.055594   13370 main.go:141] libmachine: STDERR: 
	I0507 11:22:23.055615   13370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:23.055620   13370 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:23.055663   13370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:7f:42:b7:bd:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:23.057660   13370 main.go:141] libmachine: STDOUT: 
	I0507 11:22:23.057684   13370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:23.057703   13370 client.go:171] duration metric: took 223.779667ms to LocalClient.Create
	I0507 11:22:23.809726   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0507 11:22:23.848324   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0
	I0507 11:22:23.848347   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0
	I0507 11:22:23.855800   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0507 11:22:23.957133   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0507 11:22:23.957157   13370 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.147448625s
	I0507 11:22:23.957176   13370 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0507 11:22:24.038939   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0
	I0507 11:22:24.041873   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0507 11:22:24.071905   13370 cache.go:162] opening:  /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0507 11:22:25.057808   13370 start.go:128] duration metric: took 2.247983833s to createHost
	I0507 11:22:25.057833   13370 start.go:83] releasing machines lock for "no-preload-504000", held for 2.248074666s
	W0507 11:22:25.057864   13370 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:25.067709   13370 out.go:177] * Deleting "no-preload-504000" in qemu2 ...
	W0507 11:22:25.086044   13370 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:25.086057   13370 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:26.467038   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0507 11:22:26.467058   13370 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.657395542s
	I0507 11:22:26.467069   13370 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0507 11:22:26.602730   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0507 11:22:26.602749   13370 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 3.792982875s
	I0507 11:22:26.602758   13370 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0507 11:22:27.336916   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0507 11:22:27.336962   13370 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 4.52752s
	I0507 11:22:27.336976   13370 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0507 11:22:27.500024   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0507 11:22:27.500048   13370 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 4.690224542s
	I0507 11:22:27.500059   13370 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0507 11:22:27.738629   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0507 11:22:27.738657   13370 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 4.929210875s
	I0507 11:22:27.738672   13370 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0507 11:22:30.086021   13370 start.go:360] acquireMachinesLock for no-preload-504000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:30.086239   13370 start.go:364] duration metric: took 183.334µs to acquireMachinesLock for "no-preload-504000"
	I0507 11:22:30.086300   13370 start.go:93] Provisioning new machine with config: &{Name:no-preload-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:30.086383   13370 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:30.096066   13370 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:30.122240   13370 start.go:159] libmachine.API.Create for "no-preload-504000" (driver="qemu2")
	I0507 11:22:30.122274   13370 client.go:168] LocalClient.Create starting
	I0507 11:22:30.122348   13370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:30.122393   13370 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:30.122413   13370 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:30.122463   13370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:30.122493   13370 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:30.122504   13370 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:30.122853   13370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:30.261606   13370 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:30.295161   13370 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:30.295167   13370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:30.295340   13370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:30.308319   13370 main.go:141] libmachine: STDOUT: 
	I0507 11:22:30.308344   13370 main.go:141] libmachine: STDERR: 
	I0507 11:22:30.308413   13370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2 +20000M
	I0507 11:22:30.319776   13370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:30.319802   13370 main.go:141] libmachine: STDERR: 
	I0507 11:22:30.319819   13370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:30.319830   13370 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:30.319866   13370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:1f:bc:1a:14:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:30.321884   13370 main.go:141] libmachine: STDOUT: 
	I0507 11:22:30.321903   13370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:30.321917   13370 client.go:171] duration metric: took 199.644083ms to LocalClient.Create
	I0507 11:22:31.644531   13370 cache.go:157] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0507 11:22:31.644581   13370 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.835190875s
	I0507 11:22:31.644592   13370 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0507 11:22:31.644620   13370 cache.go:87] Successfully saved all images to host disk.
	I0507 11:22:32.324137   13370 start.go:128] duration metric: took 2.2377885s to createHost
	I0507 11:22:32.324228   13370 start.go:83] releasing machines lock for "no-preload-504000", held for 2.238028667s
	W0507 11:22:32.324671   13370 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:32.337353   13370 out.go:177] 
	W0507 11:22:32.340409   13370 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:32.340436   13370 out.go:239] * 
	* 
	W0507 11:22:32.343008   13370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:32.352188   13370 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (55.934833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.73s)
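One detail worth noting in this block: the image-cache goroutines (the cache.go lines) all completed successfully even though both VM creation attempts failed, since minikube downloads and saves images concurrently with host creation. As a sketch, the resulting tarballs can be inspected directly under the cache directory shown in the log:

    # The saved image tarballs land here regardless of whether the VM started:
    ls /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/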

TestStartStop/group/no-preload/serial/DeployApp (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-504000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-504000 create -f testdata/busybox.yaml: exit status 1 (29.600834ms)

** stderr ** 
	error: context "no-preload-504000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-504000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (27.311416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (27.057208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-504000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-504000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-504000 describe deploy/metrics-server -n kube-system: exit status 1 (27.617416ms)

** stderr ** 
	error: context "no-preload-504000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-504000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (28.650875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.189221083s)

-- stdout --
	* [no-preload-504000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-504000" primary control-plane node in "no-preload-504000" cluster
	* Restarting existing qemu2 VM for "no-preload-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:36.078717   13451 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:36.078852   13451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:36.078855   13451 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:36.078858   13451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:36.078999   13451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:36.080013   13451 out.go:298] Setting JSON to false
	I0507 11:22:36.096416   13451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6727,"bootTime":1715099429,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:36.096481   13451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:36.101596   13451 out.go:177] * [no-preload-504000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:36.108627   13451 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:36.108679   13451 notify.go:220] Checking for updates...
	I0507 11:22:36.112605   13451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:36.116552   13451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:36.119608   13451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:36.122551   13451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:36.125588   13451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:36.128825   13451 config.go:182] Loaded profile config "no-preload-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:36.129076   13451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:36.133567   13451 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:22:36.140532   13451 start.go:297] selected driver: qemu2
	I0507 11:22:36.140539   13451 start.go:901] validating driver "qemu2" against &{Name:no-preload-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:36.140594   13451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:36.142949   13451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:36.142975   13451 cni.go:84] Creating CNI manager for ""
	I0507 11:22:36.142981   13451 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:36.143009   13451 start.go:340] cluster config:
	{Name:no-preload-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:36.147102   13451 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.154594   13451 out.go:177] * Starting "no-preload-504000" primary control-plane node in "no-preload-504000" cluster
	I0507 11:22:36.158473   13451 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:36.158553   13451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/no-preload-504000/config.json ...
	I0507 11:22:36.158576   13451 cache.go:107] acquiring lock: {Name:mk93cab9782caf818e2fce3a23d39a17d84a3524 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158605   13451 cache.go:107] acquiring lock: {Name:mkdcbc54d8c80b71ddd09466b7040ce533d0306f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158625   13451 cache.go:107] acquiring lock: {Name:mk163457a604919bf95e976f8529faa50a84c24a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158638   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0507 11:22:36.158642   13451 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.458µs
	I0507 11:22:36.158649   13451 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0507 11:22:36.158656   13451 cache.go:107] acquiring lock: {Name:mk6bd8f9bfb9f6d4425c46d111dadc9546279f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158661   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0507 11:22:36.158666   13451 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 64.417µs
	I0507 11:22:36.158671   13451 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0507 11:22:36.158679   13451 cache.go:107] acquiring lock: {Name:mk2bef865c1999a4aea1e2d338942911ab96b7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158692   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0507 11:22:36.158695   13451 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 39.542µs
	I0507 11:22:36.158697   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0507 11:22:36.158702   13451 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 115.375µs
	I0507 11:22:36.158708   13451 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0507 11:22:36.158698   13451 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0507 11:22:36.158708   13451 cache.go:107] acquiring lock: {Name:mk21e1e6cf1ce9ad369bc03877e354f049ad99c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158714   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0507 11:22:36.158755   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0507 11:22:36.158758   13451 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 55.75µs
	I0507 11:22:36.158762   13451 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0507 11:22:36.158762   13451 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 61.083µs
	I0507 11:22:36.158782   13451 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0507 11:22:36.158785   13451 cache.go:107] acquiring lock: {Name:mk05322f6dd069e3c26940b41714b24f167672f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158806   13451 cache.go:107] acquiring lock: {Name:mk66c87c134423c1b4084b80a18f3a33dda1be5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:36.158830   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0507 11:22:36.158834   13451 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 71.292µs
	I0507 11:22:36.158840   13451 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0507 11:22:36.158853   13451 cache.go:115] /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0507 11:22:36.158857   13451 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 72.5µs
	I0507 11:22:36.158864   13451 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0507 11:22:36.158867   13451 cache.go:87] Successfully saved all images to host disk.
	I0507 11:22:36.158987   13451 start.go:360] acquireMachinesLock for no-preload-504000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:36.159017   13451 start.go:364] duration metric: took 25.584µs to acquireMachinesLock for "no-preload-504000"
	I0507 11:22:36.159027   13451 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:36.159031   13451 fix.go:54] fixHost starting: 
	I0507 11:22:36.159133   13451 fix.go:112] recreateIfNeeded on no-preload-504000: state=Stopped err=<nil>
	W0507 11:22:36.159140   13451 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:36.166525   13451 out.go:177] * Restarting existing qemu2 VM for "no-preload-504000" ...
	I0507 11:22:36.169503   13451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:1f:bc:1a:14:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:36.171459   13451 main.go:141] libmachine: STDOUT: 
	I0507 11:22:36.171480   13451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:36.171509   13451 fix.go:56] duration metric: took 12.477916ms for fixHost
	I0507 11:22:36.171513   13451 start.go:83] releasing machines lock for "no-preload-504000", held for 12.491917ms
	W0507 11:22:36.171519   13451 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:36.171551   13451 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:36.171555   13451 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:41.173608   13451 start.go:360] acquireMachinesLock for no-preload-504000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:41.174071   13451 start.go:364] duration metric: took 359.916µs to acquireMachinesLock for "no-preload-504000"
	I0507 11:22:41.174146   13451 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:41.174167   13451 fix.go:54] fixHost starting: 
	I0507 11:22:41.174942   13451 fix.go:112] recreateIfNeeded on no-preload-504000: state=Stopped err=<nil>
	W0507 11:22:41.174969   13451 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:41.190430   13451 out.go:177] * Restarting existing qemu2 VM for "no-preload-504000" ...
	I0507 11:22:41.194602   13451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:1f:bc:1a:14:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/no-preload-504000/disk.qcow2
	I0507 11:22:41.204024   13451 main.go:141] libmachine: STDOUT: 
	I0507 11:22:41.204120   13451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:41.204221   13451 fix.go:56] duration metric: took 30.05475ms for fixHost
	I0507 11:22:41.204238   13451 start.go:83] releasing machines lock for "no-preload-504000", held for 30.145042ms
	W0507 11:22:41.204452   13451 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:41.213365   13451 out.go:177] 
	W0507 11:22:41.216441   13451 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:41.216475   13451 out.go:239] * 
	* 
	W0507 11:22:41.219375   13451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:41.228363   13451 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (64.592459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
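
Note on the shared root cause: every qemu2 start in this group fails identically: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the assertions that follow fail as side effects. A minimal triage sketch for the build host, assuming socket_vmnet lives under the /opt/socket_vmnet prefix shown in the log (the launchd label and the gateway address below are assumptions, not taken from this report):

	# Does the unix socket exist at the path minikube is dialing?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet is managed by launchd, confirm the daemon is loaded (label is an assumption):
	sudo launchctl list | grep -i socket_vmnet
	# Run the daemon in the foreground to watch it accept connections (flags per the socket_vmnet README):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet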

TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-163000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-163000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.914426667s)

-- stdout --
	* [embed-certs-163000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-163000" primary control-plane node in "embed-certs-163000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-163000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:38.420066   13462 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:38.420204   13462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:38.420208   13462 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:38.420210   13462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:38.420343   13462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:38.421467   13462 out.go:298] Setting JSON to false
	I0507 11:22:38.437429   13462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6729,"bootTime":1715099429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:38.437500   13462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:38.441969   13462 out.go:177] * [embed-certs-163000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:38.448947   13462 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:38.448987   13462 notify.go:220] Checking for updates...
	I0507 11:22:38.455796   13462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:38.458925   13462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:38.461944   13462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:38.464941   13462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:38.467945   13462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:38.471323   13462 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:38.471422   13462 config.go:182] Loaded profile config "no-preload-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:38.471471   13462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:38.475948   13462 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:22:38.482928   13462 start.go:297] selected driver: qemu2
	I0507 11:22:38.482934   13462 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:22:38.482941   13462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:38.485150   13462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:22:38.489932   13462 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:22:38.493004   13462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:38.493020   13462 cni.go:84] Creating CNI manager for ""
	I0507 11:22:38.493027   13462 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:38.493030   13462 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:22:38.493059   13462 start.go:340] cluster config:
	{Name:embed-certs-163000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:38.497628   13462 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:38.505946   13462 out.go:177] * Starting "embed-certs-163000" primary control-plane node in "embed-certs-163000" cluster
	I0507 11:22:38.509889   13462 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:38.509908   13462 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:38.509915   13462 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:38.509984   13462 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:38.509990   13462 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:22:38.510057   13462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/embed-certs-163000/config.json ...
	I0507 11:22:38.510069   13462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/embed-certs-163000/config.json: {Name:mk66806ac25eb9fed6490f34e54d461be0a3c53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:22:38.510504   13462 start.go:360] acquireMachinesLock for embed-certs-163000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:38.510537   13462 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "embed-certs-163000"
	I0507 11:22:38.510548   13462 start.go:93] Provisioning new machine with config: &{Name:embed-certs-163000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:38.510602   13462 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:38.519883   13462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:38.537120   13462 start.go:159] libmachine.API.Create for "embed-certs-163000" (driver="qemu2")
	I0507 11:22:38.537143   13462 client.go:168] LocalClient.Create starting
	I0507 11:22:38.537194   13462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:38.537225   13462 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:38.537235   13462 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:38.537273   13462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:38.537295   13462 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:38.537303   13462 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:38.537620   13462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:38.711491   13462 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:38.846199   13462 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:38.846205   13462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:38.846375   13462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:38.858911   13462 main.go:141] libmachine: STDOUT: 
	I0507 11:22:38.858933   13462 main.go:141] libmachine: STDERR: 
	I0507 11:22:38.858989   13462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2 +20000M
	I0507 11:22:38.870013   13462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:38.870029   13462 main.go:141] libmachine: STDERR: 
	I0507 11:22:38.870043   13462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:38.870046   13462 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:38.870082   13462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:35:3a:32:bd:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:38.871754   13462 main.go:141] libmachine: STDOUT: 
	I0507 11:22:38.871773   13462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:38.871793   13462 client.go:171] duration metric: took 334.652125ms to LocalClient.Create
	I0507 11:22:40.874048   13462 start.go:128] duration metric: took 2.363462875s to createHost
	I0507 11:22:40.874173   13462 start.go:83] releasing machines lock for "embed-certs-163000", held for 2.36369375s
	W0507 11:22:40.874245   13462 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:40.889516   13462 out.go:177] * Deleting "embed-certs-163000" in qemu2 ...
	W0507 11:22:40.918344   13462 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:40.918376   13462 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:45.920470   13462 start.go:360] acquireMachinesLock for embed-certs-163000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:45.920965   13462 start.go:364] duration metric: took 392.709µs to acquireMachinesLock for "embed-certs-163000"
	I0507 11:22:45.921099   13462 start.go:93] Provisioning new machine with config: &{Name:embed-certs-163000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:45.921364   13462 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:45.931041   13462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:45.982578   13462 start.go:159] libmachine.API.Create for "embed-certs-163000" (driver="qemu2")
	I0507 11:22:45.982636   13462 client.go:168] LocalClient.Create starting
	I0507 11:22:45.982780   13462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:45.982857   13462 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:45.982882   13462 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:45.982947   13462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:45.982990   13462 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:45.983005   13462 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:45.983715   13462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:46.135040   13462 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:46.235878   13462 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:46.235883   13462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:46.236042   13462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:46.248574   13462 main.go:141] libmachine: STDOUT: 
	I0507 11:22:46.248597   13462 main.go:141] libmachine: STDERR: 
	I0507 11:22:46.248665   13462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2 +20000M
	I0507 11:22:46.259471   13462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:46.259503   13462 main.go:141] libmachine: STDERR: 
	I0507 11:22:46.259514   13462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:46.259518   13462 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:46.259555   13462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c4:a6:6c:0c:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:46.261254   13462 main.go:141] libmachine: STDOUT: 
	I0507 11:22:46.261272   13462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:46.261285   13462 client.go:171] duration metric: took 278.653333ms to LocalClient.Create
	I0507 11:22:48.263414   13462 start.go:128] duration metric: took 2.342066375s to createHost
	I0507 11:22:48.263538   13462 start.go:83] releasing machines lock for "embed-certs-163000", held for 2.342550709s
	W0507 11:22:48.263904   13462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-163000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-163000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:48.272499   13462 out.go:177] 
	W0507 11:22:48.278586   13462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:48.278649   13462 out.go:239] * 
	* 
	W0507 11:22:48.281686   13462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:48.291459   13462 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-163000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (63.629875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
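
Note: in the create path above, both qemu-img convert and qemu-img resize finish with empty STDERR, so disk provisioning is healthy; only the socket_vmnet_client wrapper around qemu-system-aarch64 fails. A sketch for reproducing that single hop in isolation, assuming socket_vmnet_client execs its trailing command with the accepted connection on fd 3 (behavior described in the socket_vmnet README; treat it as an assumption here):

	# Prints "ok" only if the daemon accepts the connection on /var/run/socket_vmnet:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok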

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-504000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (31.012ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
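
Note: the error context "no-preload-504000" does not exist is a downstream symptom, not a separate bug: minikube writes a kubeconfig context only after a successful start, and both start attempts above exited 80. A quick check of which contexts were actually written, using the KUBECONFIG path from the log:

	KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig kubectl config get-contexts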

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-504000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.655584ms)

** stderr ** 
	error: context "no-preload-504000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (28.071166ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
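
Note: on a healthy cluster this test inspects the dashboard-metrics-scraper deployment and expects its image to contain registry.k8s.io/echoserver:1.4 (the CustomAddonImages override visible in the cluster config earlier in this report). The check reduces to roughly the following hand-run sketch, not the test's exact code:

	kubectl --context no-preload-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard | grep echoserver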

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-504000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (28.218041ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
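
Note: the want/got diff above is one-sided because image list ran against a stopped host, so every expected image carries a leading "-" (wanted, not found). Against a running profile, one way to extract just the tags for comparison; the repoTags field name is an assumption about minikube's JSON image schema, so verify it on a live cluster:

	out/minikube-darwin-arm64 -p no-preload-504000 image list --format=json | jq -r '.[].repoTags[]'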

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-504000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-504000 --alsologtostderr -v=1: exit status 83 (39.4185ms)

-- stdout --
	* The control-plane node no-preload-504000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-504000"

-- /stdout --
** stderr ** 
	I0507 11:22:41.490759   13489 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:41.490926   13489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:41.490929   13489 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:41.490932   13489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:41.491058   13489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:41.491295   13489 out.go:298] Setting JSON to false
	I0507 11:22:41.491302   13489 mustload.go:65] Loading cluster: no-preload-504000
	I0507 11:22:41.491487   13489 config.go:182] Loaded profile config "no-preload-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:41.494963   13489 out.go:177] * The control-plane node no-preload-504000 host is not running: state=Stopped
	I0507 11:22:41.498756   13489 out.go:177]   To start a cluster, run: "minikube start -p no-preload-504000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-504000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (27.800875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (27.725708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
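
Note: pause exits 83 here because mustload finds the profile's host stopped before any pause logic runs; the recovery is the one minikube itself prints. In order, assuming the socket_vmnet daemon has been restored first:

	out/minikube-darwin-arm64 start -p no-preload-504000
	out/minikube-darwin-arm64 pause -p no-preload-504000 --alsologtostderr -v=1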

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-991000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-991000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.744079042s)

-- stdout --
	* [default-k8s-diff-port-991000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-991000" primary control-plane node in "default-k8s-diff-port-991000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:42.176952   13526 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:42.177084   13526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:42.177088   13526 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:42.177090   13526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:42.177226   13526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:42.178302   13526 out.go:298] Setting JSON to false
	I0507 11:22:42.194429   13526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6733,"bootTime":1715099429,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:42.194500   13526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:42.198969   13526 out.go:177] * [default-k8s-diff-port-991000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:42.206084   13526 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:42.208987   13526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:42.206129   13526 notify.go:220] Checking for updates...
	I0507 11:22:42.213045   13526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:42.217112   13526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:42.220103   13526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:42.223081   13526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:42.226487   13526 config.go:182] Loaded profile config "embed-certs-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:42.226550   13526 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:42.226604   13526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:42.230953   13526 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:22:42.238109   13526 start.go:297] selected driver: qemu2
	I0507 11:22:42.238117   13526 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:22:42.238125   13526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:42.240593   13526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 11:22:42.243068   13526 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:22:42.246154   13526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:42.246176   13526 cni.go:84] Creating CNI manager for ""
	I0507 11:22:42.246186   13526 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:42.246191   13526 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:22:42.246236   13526 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:42.250661   13526 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:42.257991   13526 out.go:177] * Starting "default-k8s-diff-port-991000" primary control-plane node in "default-k8s-diff-port-991000" cluster
	I0507 11:22:42.262094   13526 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:42.262108   13526 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:42.262116   13526 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:42.262175   13526 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:42.262181   13526 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:22:42.262234   13526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/default-k8s-diff-port-991000/config.json ...
	I0507 11:22:42.262245   13526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/default-k8s-diff-port-991000/config.json: {Name:mk03b917efefedbf181064b27907db21aa62aec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:22:42.262477   13526 start.go:360] acquireMachinesLock for default-k8s-diff-port-991000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:42.262512   13526 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "default-k8s-diff-port-991000"
	I0507 11:22:42.262524   13526 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:42.262556   13526 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:42.271072   13526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:42.287414   13526 start.go:159] libmachine.API.Create for "default-k8s-diff-port-991000" (driver="qemu2")
	I0507 11:22:42.287441   13526 client.go:168] LocalClient.Create starting
	I0507 11:22:42.287497   13526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:42.287525   13526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:42.287535   13526 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:42.287574   13526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:42.287596   13526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:42.287603   13526 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:42.287977   13526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:42.423046   13526 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:42.518983   13526 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:42.518988   13526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:42.519134   13526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:42.531711   13526 main.go:141] libmachine: STDOUT: 
	I0507 11:22:42.531735   13526 main.go:141] libmachine: STDERR: 
	I0507 11:22:42.531796   13526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2 +20000M
	I0507 11:22:42.542925   13526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:42.542941   13526 main.go:141] libmachine: STDERR: 
	I0507 11:22:42.542951   13526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:42.542956   13526 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:42.542982   13526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:99:8d:3f:f9:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:42.544736   13526 main.go:141] libmachine: STDOUT: 
	I0507 11:22:42.544752   13526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:42.544772   13526 client.go:171] duration metric: took 257.333916ms to LocalClient.Create
	I0507 11:22:44.546919   13526 start.go:128] duration metric: took 2.28440725s to createHost
	I0507 11:22:44.546976   13526 start.go:83] releasing machines lock for "default-k8s-diff-port-991000", held for 2.284519958s
	W0507 11:22:44.547112   13526 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:44.560548   13526 out.go:177] * Deleting "default-k8s-diff-port-991000" in qemu2 ...
	W0507 11:22:44.584140   13526 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:44.584173   13526 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:49.586180   13526 start.go:360] acquireMachinesLock for default-k8s-diff-port-991000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:49.586750   13526 start.go:364] duration metric: took 443.709µs to acquireMachinesLock for "default-k8s-diff-port-991000"
	I0507 11:22:49.586960   13526 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:49.587242   13526 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:49.593017   13526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:49.642083   13526 start.go:159] libmachine.API.Create for "default-k8s-diff-port-991000" (driver="qemu2")
	I0507 11:22:49.642141   13526 client.go:168] LocalClient.Create starting
	I0507 11:22:49.642242   13526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:49.642295   13526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:49.642317   13526 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:49.642376   13526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:49.642413   13526 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:49.642427   13526 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:49.643038   13526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:49.798122   13526 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:49.825595   13526 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:49.825604   13526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:49.825766   13526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:49.838447   13526 main.go:141] libmachine: STDOUT: 
	I0507 11:22:49.838466   13526 main.go:141] libmachine: STDERR: 
	I0507 11:22:49.838526   13526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2 +20000M
	I0507 11:22:49.849704   13526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:49.849747   13526 main.go:141] libmachine: STDERR: 
	I0507 11:22:49.849766   13526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:49.849771   13526 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:49.849802   13526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a7:61:ed:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:49.851525   13526 main.go:141] libmachine: STDOUT: 
	I0507 11:22:49.851545   13526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:49.851568   13526 client.go:171] duration metric: took 209.426334ms to LocalClient.Create
	I0507 11:22:51.852802   13526 start.go:128] duration metric: took 2.265607333s to createHost
	I0507 11:22:51.852813   13526 start.go:83] releasing machines lock for "default-k8s-diff-port-991000", held for 2.266078042s
	W0507 11:22:51.852886   13526 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:51.864727   13526 out.go:177] 
	W0507 11:22:51.871692   13526 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:51.871699   13526 out.go:239] * 
	* 
	W0507 11:22:51.872207   13526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:51.886699   13526 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-991000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (31.385792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.78s)
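Both VM-creation attempts in this test fail at the same step: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so no host is ever created. A minimal Go sketch that checks this precondition, assuming the SocketVMnetPath from the config dump above; illustrative only, not part of the test suite:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// SocketVMnetPath from the cluster config logged above.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// Matches the repeated failure in this run: connection refused,
    		// i.e. no socket_vmnet daemon is serving the socket.
    		fmt.Println("unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println(sock, "is accepting connections")
    }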

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-163000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-163000 create -f testdata/busybox.yaml: exit status 1 (30.049ms)

** stderr ** 
	error: context "embed-certs-163000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-163000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (28.12375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (28.113083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
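The DeployApp step never reaches a cluster: kubectl rejects the command because no "embed-certs-163000" context was ever written to the kubeconfig, the earlier start having exited before provisioning. A small Go sketch of a pre-flight context check; contextExists is a hypothetical helper, not part of the suite:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // contextExists is a hypothetical helper: it lists kubeconfig context
    // names and looks for an exact match.
    func contextExists(name string) bool {
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false
    	}
    	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if ctx == name {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println(contextExists("embed-certs-163000")) // false in this run
    }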

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-163000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-163000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-163000 describe deploy/metrics-server -n kube-system: exit status 1 (26.604042ms)

** stderr ** 
	error: context "embed-certs-163000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-163000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (27.846416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
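The assertion at start_stop_delete_test.go:221 reads as a plain substring check against the describe output ("Expected to contain ..."), and that output is empty here because the describe command itself failed. A sketch of such a check, assuming the expected string is built from the --images/--registries overrides passed above; illustrative only:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	deploymentInfo := "" // empty: "kubectl describe" failed, see stderr above
    	want := " fake.domain/registry.k8s.io/echoserver:1.4"
    	if !strings.Contains(deploymentInfo, want) {
    		fmt.Println("addon did not load correct image") // the reported failure
    	}
    }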

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-163000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-163000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.19479075s)

-- stdout --
	* [embed-certs-163000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-163000" primary control-plane node in "embed-certs-163000" cluster
	* Restarting existing qemu2 VM for "embed-certs-163000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-163000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:22:51.929197   13579 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:51.929329   13579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:51.929333   13579 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:51.929335   13579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:51.929462   13579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:51.930460   13579 out.go:298] Setting JSON to false
	I0507 11:22:51.947833   13579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6742,"bootTime":1715099429,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:51.947943   13579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:51.952753   13579 out.go:177] * [embed-certs-163000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:51.964717   13579 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:51.960749   13579 notify.go:220] Checking for updates...
	I0507 11:22:51.972638   13579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:51.973962   13579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:51.976711   13579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:51.979689   13579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:51.988740   13579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:51.992945   13579 config.go:182] Loaded profile config "embed-certs-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:51.993194   13579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:51.997576   13579 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:22:52.004881   13579 start.go:297] selected driver: qemu2
	I0507 11:22:52.004892   13579 start.go:901] validating driver "qemu2" against &{Name:embed-certs-163000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:52.004950   13579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:52.007634   13579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:52.007676   13579 cni.go:84] Creating CNI manager for ""
	I0507 11:22:52.007683   13579 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:52.007708   13579 start.go:340] cluster config:
	{Name:embed-certs-163000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-163000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:52.012290   13579 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:52.018737   13579 out.go:177] * Starting "embed-certs-163000" primary control-plane node in "embed-certs-163000" cluster
	I0507 11:22:52.019921   13579 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:52.019939   13579 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:52.019947   13579 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:52.020032   13579 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:52.020038   13579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:22:52.020092   13579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/embed-certs-163000/config.json ...
	I0507 11:22:52.020373   13579 start.go:360] acquireMachinesLock for embed-certs-163000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:52.020403   13579 start.go:364] duration metric: took 21.584µs to acquireMachinesLock for "embed-certs-163000"
	I0507 11:22:52.020412   13579 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:52.020416   13579 fix.go:54] fixHost starting: 
	I0507 11:22:52.020524   13579 fix.go:112] recreateIfNeeded on embed-certs-163000: state=Stopped err=<nil>
	W0507 11:22:52.020531   13579 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:52.024745   13579 out.go:177] * Restarting existing qemu2 VM for "embed-certs-163000" ...
	I0507 11:22:52.031881   13579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c4:a6:6c:0c:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:52.034053   13579 main.go:141] libmachine: STDOUT: 
	I0507 11:22:52.034076   13579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:52.034104   13579 fix.go:56] duration metric: took 13.687417ms for fixHost
	I0507 11:22:52.034108   13579 start.go:83] releasing machines lock for "embed-certs-163000", held for 13.70225ms
	W0507 11:22:52.034117   13579 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:52.034159   13579 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:52.034164   13579 start.go:728] Will try again in 5 seconds ...
	I0507 11:22:57.036220   13579 start.go:360] acquireMachinesLock for embed-certs-163000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:57.036645   13579 start.go:364] duration metric: took 310.416µs to acquireMachinesLock for "embed-certs-163000"
	I0507 11:22:57.036762   13579 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:57.036786   13579 fix.go:54] fixHost starting: 
	I0507 11:22:57.037541   13579 fix.go:112] recreateIfNeeded on embed-certs-163000: state=Stopped err=<nil>
	W0507 11:22:57.037571   13579 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:57.046030   13579 out.go:177] * Restarting existing qemu2 VM for "embed-certs-163000" ...
	I0507 11:22:57.050323   13579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c4:a6:6c:0c:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/embed-certs-163000/disk.qcow2
	I0507 11:22:57.059658   13579 main.go:141] libmachine: STDOUT: 
	I0507 11:22:57.059723   13579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:57.059783   13579 fix.go:56] duration metric: took 23.002333ms for fixHost
	I0507 11:22:57.059796   13579 start.go:83] releasing machines lock for "embed-certs-163000", held for 23.133958ms
	W0507 11:22:57.059981   13579 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-163000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-163000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:57.066884   13579 out.go:177] 
	W0507 11:22:57.071071   13579 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:57.071119   13579 out.go:239] * 
	* 
	W0507 11:22:57.073621   13579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:22:57.081977   13579 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-163000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (63.959208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
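The trace above shows minikube's start retry pattern: one attempt, a "StartHost failed, but will try again" warning, a fixed 5-second wait, one more attempt, then exit with GUEST_PROVISION. A compressed Go sketch of that flow, with startHost standing in for the qemu2 driver start; illustrative only, not minikube's actual implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the qemu2 driver start; in this run it always
    // fails because nothing is listening on /var/run/socket_vmnet.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	err := startHost()
    	if err == nil {
    		return
    	}
    	fmt.Println("! StartHost failed, but will try again:", err)
    	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    	if err := startHost(); err != nil {
    		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    	}
    }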

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-991000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991000 create -f testdata/busybox.yaml: exit status 1 (27.382875ms)

** stderr ** 
	error: context "default-k8s-diff-port-991000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-991000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (34.273292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (29.870625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-991000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-991000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991000 describe deploy/metrics-server -n kube-system: exit status 1 (27.270916ms)

** stderr ** 
	error: context "default-k8s-diff-port-991000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-991000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (28.473167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-991000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-991000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.196406333s)

-- stdout --
	* [default-k8s-diff-port-991000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-991000" primary control-plane node in "default-k8s-diff-port-991000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-991000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-991000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0507 11:22:55.848311   13621 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:55.848437   13621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:55.848439   13621 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:55.848442   13621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:55.848570   13621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:55.849565   13621 out.go:298] Setting JSON to false
	I0507 11:22:55.865329   13621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6746,"bootTime":1715099429,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:55.865392   13621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:55.870457   13621 out.go:177] * [default-k8s-diff-port-991000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:55.877650   13621 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:55.877722   13621 notify.go:220] Checking for updates...
	I0507 11:22:55.880629   13621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:55.884589   13621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:55.888563   13621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:55.891629   13621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:55.894581   13621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:55.897920   13621 config.go:182] Loaded profile config "default-k8s-diff-port-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:55.898196   13621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:55.902614   13621 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:22:55.909571   13621 start.go:297] selected driver: qemu2
	I0507 11:22:55.909580   13621 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:55.909658   13621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:55.911937   13621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 11:22:55.911964   13621 cni.go:84] Creating CNI manager for ""
	I0507 11:22:55.911973   13621 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:55.912007   13621 start.go:340] cluster config:
	{Name:default-k8s-diff-port-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:55.916434   13621 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:55.924632   13621 out.go:177] * Starting "default-k8s-diff-port-991000" primary control-plane node in "default-k8s-diff-port-991000" cluster
	I0507 11:22:55.928484   13621 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:55.928496   13621 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:55.928503   13621 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:55.928552   13621 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:55.928557   13621 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:22:55.928613   13621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/default-k8s-diff-port-991000/config.json ...
	I0507 11:22:55.929115   13621 start.go:360] acquireMachinesLock for default-k8s-diff-port-991000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:55.929142   13621 start.go:364] duration metric: took 21.292µs to acquireMachinesLock for "default-k8s-diff-port-991000"
	I0507 11:22:55.929152   13621 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:22:55.929156   13621 fix.go:54] fixHost starting: 
	I0507 11:22:55.929272   13621 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991000: state=Stopped err=<nil>
	W0507 11:22:55.929281   13621 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:22:55.933536   13621 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-991000" ...
	I0507 11:22:55.941576   13621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a7:61:ed:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:22:55.943538   13621 main.go:141] libmachine: STDOUT: 
	I0507 11:22:55.943560   13621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:55.943593   13621 fix.go:56] duration metric: took 14.435875ms for fixHost
	I0507 11:22:55.943597   13621 start.go:83] releasing machines lock for "default-k8s-diff-port-991000", held for 14.451333ms
	W0507 11:22:55.943604   13621 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:22:55.943636   13621 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:22:55.943640   13621 start.go:728] Will try again in 5 seconds ...
	I0507 11:23:00.945766   13621 start.go:360] acquireMachinesLock for default-k8s-diff-port-991000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:23:00.946222   13621 start.go:364] duration metric: took 358.458µs to acquireMachinesLock for "default-k8s-diff-port-991000"
	I0507 11:23:00.946363   13621 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:23:00.946386   13621 fix.go:54] fixHost starting: 
	I0507 11:23:00.947163   13621 fix.go:112] recreateIfNeeded on default-k8s-diff-port-991000: state=Stopped err=<nil>
	W0507 11:23:00.947189   13621 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:23:00.962702   13621 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-991000" ...
	I0507 11:23:00.966820   13621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a7:61:ed:33:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/default-k8s-diff-port-991000/disk.qcow2
	I0507 11:23:00.977125   13621 main.go:141] libmachine: STDOUT: 
	I0507 11:23:00.977205   13621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:23:00.977294   13621 fix.go:56] duration metric: took 30.908625ms for fixHost
	I0507 11:23:00.977312   13621 start.go:83] releasing machines lock for "default-k8s-diff-port-991000", held for 31.06825ms
	W0507 11:23:00.977538   13621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-991000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-991000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:23:00.985602   13621 out.go:177] 
	W0507 11:23:00.988642   13621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:23:00.988667   13621 out.go:239] * 
	* 
	W0507 11:23:00.991111   13621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:23:01.000430   13621 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-991000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (68.306667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
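Note: every start in this run dies at the same step: socket_vmnet_client cannot dial the unix socket /var/run/socket_vmnet, so QEMU never gets its network file descriptor and minikube exits 80 (GUEST_PROVISION). A minimal sketch of reproducing that probe outside minikube (standard library only):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the socket_vmnet control socket exactly once. On this agent the
// daemon is not listening, so the dial fails with "connection refused",
// matching the STDERR lines captured above.
func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println(sock, "is accepting connections")
}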
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-163000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (31.474708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-163000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-163000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-163000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.151583ms)
** stderr ** 
	error: context "embed-certs-163000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-163000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (27.9425ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-163000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (28.059459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
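Note: the "(-want +got)" diff above is the usual github.com/google/go-cmp rendering; because "image list" ran against a stopped host, the got side is empty and every expected image carries a "-" marker. A minimal sketch of how such a diff is produced (example want list, not the test's exact data):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// Diff an expected image list against an empty observed list, which is the
// shape of failure VerifyKubernetesImages reports when the VM never started.
func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: the host is stopped, so no images are listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.0 images missing (-want +got):\n%s", diff)
	}
}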
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-163000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-163000 --alsologtostderr -v=1: exit status 83 (39.556875ms)
-- stdout --
	* The control-plane node embed-certs-163000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-163000"
-- /stdout --
** stderr ** 
	I0507 11:22:57.342077   13640 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:57.342270   13640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:57.342273   13640 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:57.342275   13640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:57.342414   13640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:57.342637   13640 out.go:298] Setting JSON to false
	I0507 11:22:57.342648   13640 mustload.go:65] Loading cluster: embed-certs-163000
	I0507 11:22:57.342835   13640 config.go:182] Loaded profile config "embed-certs-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:57.346487   13640 out.go:177] * The control-plane node embed-certs-163000 host is not running: state=Stopped
	I0507 11:22:57.350232   13640 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-163000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-163000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (27.816958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (27.865625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-163000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
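Note: exit status 83 here is minikube declining to pause a host that is already stopped, not a crash. The harness's "(dbg) Non-zero exit" lines capture that status via os/exec; a self-contained sketch of the same capture (binary path and profile taken from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Run a command the way the test helpers do and recover its exit status
// from *exec.ExitError instead of treating any failure as fatal.
func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "embed-certs-163000", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Printf("success:\n%s", out)
}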
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-478000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-478000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.923876125s)
-- stdout --
	* [newest-cni-478000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-478000" primary control-plane node in "newest-cni-478000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-478000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0507 11:22:57.787385   13663 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:22:57.787572   13663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:57.787575   13663 out.go:304] Setting ErrFile to fd 2...
	I0507 11:22:57.787577   13663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:22:57.787704   13663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:22:57.788806   13663 out.go:298] Setting JSON to false
	I0507 11:22:57.804761   13663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6748,"bootTime":1715099429,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:22:57.804846   13663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:22:57.808411   13663 out.go:177] * [newest-cni-478000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:22:57.814289   13663 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:22:57.814368   13663 notify.go:220] Checking for updates...
	I0507 11:22:57.820142   13663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:22:57.823226   13663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:22:57.826223   13663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:22:57.827753   13663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:22:57.831265   13663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:22:57.834545   13663 config.go:182] Loaded profile config "default-k8s-diff-port-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:57.834605   13663 config.go:182] Loaded profile config "multinode-334000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:22:57.834657   13663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:22:57.839071   13663 out.go:177] * Using the qemu2 driver based on user configuration
	I0507 11:22:57.846253   13663 start.go:297] selected driver: qemu2
	I0507 11:22:57.846261   13663 start.go:901] validating driver "qemu2" against <nil>
	I0507 11:22:57.846268   13663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:22:57.848508   13663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0507 11:22:57.848531   13663 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0507 11:22:57.857257   13663 out.go:177] * Automatically selected the socket_vmnet network
	I0507 11:22:57.860358   13663 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0507 11:22:57.860377   13663 cni.go:84] Creating CNI manager for ""
	I0507 11:22:57.860391   13663 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:22:57.860395   13663 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 11:22:57.860426   13663 start.go:340] cluster config:
	{Name:newest-cni-478000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:22:57.864959   13663 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:22:57.872257   13663 out.go:177] * Starting "newest-cni-478000" primary control-plane node in "newest-cni-478000" cluster
	I0507 11:22:57.876289   13663 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:22:57.876303   13663 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:22:57.876310   13663 cache.go:56] Caching tarball of preloaded images
	I0507 11:22:57.876363   13663 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:22:57.876368   13663 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:22:57.876427   13663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/newest-cni-478000/config.json ...
	I0507 11:22:57.876438   13663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/newest-cni-478000/config.json: {Name:mk2758d420690847784d65d378ed5818fad817ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 11:22:57.876887   13663 start.go:360] acquireMachinesLock for newest-cni-478000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:22:57.876923   13663 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "newest-cni-478000"
	I0507 11:22:57.876935   13663 start.go:93] Provisioning new machine with config: &{Name:newest-cni-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:22:57.876979   13663 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:22:57.886319   13663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:22:57.904457   13663 start.go:159] libmachine.API.Create for "newest-cni-478000" (driver="qemu2")
	I0507 11:22:57.904488   13663 client.go:168] LocalClient.Create starting
	I0507 11:22:57.904556   13663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:22:57.904585   13663 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:57.904600   13663 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:57.904644   13663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:22:57.904668   13663 main.go:141] libmachine: Decoding PEM data...
	I0507 11:22:57.904674   13663 main.go:141] libmachine: Parsing certificate...
	I0507 11:22:57.905183   13663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:22:58.037725   13663 main.go:141] libmachine: Creating SSH key...
	I0507 11:22:58.133506   13663 main.go:141] libmachine: Creating Disk image...
	I0507 11:22:58.133511   13663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:22:58.133675   13663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:22:58.146350   13663 main.go:141] libmachine: STDOUT: 
	I0507 11:22:58.146369   13663 main.go:141] libmachine: STDERR: 
	I0507 11:22:58.146414   13663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2 +20000M
	I0507 11:22:58.157326   13663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:22:58.157349   13663 main.go:141] libmachine: STDERR: 
	I0507 11:22:58.157372   13663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:22:58.157376   13663 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:22:58.157407   13663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6f:7f:1a:a0:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:22:58.159201   13663 main.go:141] libmachine: STDOUT: 
	I0507 11:22:58.159216   13663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:22:58.159235   13663 client.go:171] duration metric: took 254.749292ms to LocalClient.Create
	I0507 11:23:00.161363   13663 start.go:128] duration metric: took 2.284426792s to createHost
	I0507 11:23:00.161438   13663 start.go:83] releasing machines lock for "newest-cni-478000", held for 2.284572209s
	W0507 11:23:00.161504   13663 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:23:00.168815   13663 out.go:177] * Deleting "newest-cni-478000" in qemu2 ...
	W0507 11:23:00.199124   13663 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:23:00.199179   13663 start.go:728] Will try again in 5 seconds ...
	I0507 11:23:05.199752   13663 start.go:360] acquireMachinesLock for newest-cni-478000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:23:05.200229   13663 start.go:364] duration metric: took 370.25µs to acquireMachinesLock for "newest-cni-478000"
	I0507 11:23:05.200387   13663 start.go:93] Provisioning new machine with config: &{Name:newest-cni-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 11:23:05.200662   13663 start.go:125] createHost starting for "" (driver="qemu2")
	I0507 11:23:05.210345   13663 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 11:23:05.259794   13663 start.go:159] libmachine.API.Create for "newest-cni-478000" (driver="qemu2")
	I0507 11:23:05.259856   13663 client.go:168] LocalClient.Create starting
	I0507 11:23:05.259986   13663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/ca.pem
	I0507 11:23:05.260043   13663 main.go:141] libmachine: Decoding PEM data...
	I0507 11:23:05.260062   13663 main.go:141] libmachine: Parsing certificate...
	I0507 11:23:05.260137   13663 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18804-8175/.minikube/certs/cert.pem
	I0507 11:23:05.260181   13663 main.go:141] libmachine: Decoding PEM data...
	I0507 11:23:05.260192   13663 main.go:141] libmachine: Parsing certificate...
	I0507 11:23:05.260715   13663 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0507 11:23:05.404568   13663 main.go:141] libmachine: Creating SSH key...
	I0507 11:23:05.610544   13663 main.go:141] libmachine: Creating Disk image...
	I0507 11:23:05.610551   13663 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0507 11:23:05.610736   13663 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2.raw /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:23:05.624179   13663 main.go:141] libmachine: STDOUT: 
	I0507 11:23:05.624200   13663 main.go:141] libmachine: STDERR: 
	I0507 11:23:05.624249   13663 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2 +20000M
	I0507 11:23:05.635280   13663 main.go:141] libmachine: STDOUT: Image resized.
	
	I0507 11:23:05.635294   13663 main.go:141] libmachine: STDERR: 
	I0507 11:23:05.635305   13663 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:23:05.635310   13663 main.go:141] libmachine: Starting QEMU VM...
	I0507 11:23:05.635353   13663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:75:24:37:f6:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:23:05.637178   13663 main.go:141] libmachine: STDOUT: 
	I0507 11:23:05.637204   13663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:23:05.637217   13663 client.go:171] duration metric: took 377.366083ms to LocalClient.Create
	I0507 11:23:07.639418   13663 start.go:128] duration metric: took 2.438710459s to createHost
	I0507 11:23:07.639518   13663 start.go:83] releasing machines lock for "newest-cni-478000", held for 2.43933475s
	W0507 11:23:07.639848   13663 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-478000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-478000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:23:07.652497   13663 out.go:177] 
	W0507 11:23:07.656647   13663 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:23:07.656672   13663 out.go:239] * 
	* 
	W0507 11:23:07.659267   13663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:23:07.671409   13663 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-478000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000: exit status 7 (66.813167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-478000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.99s)
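Note: the FirstStart trace shows minikube's single built-in retry: StartHost fails, the half-created profile is deleted, it waits five seconds, tries once more, then exits 80. A hedged sketch of that control flow (function name is illustrative; the real failure is socket_vmnet_client's dial, stood in for here by a direct dial):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// startHost stands in for the qemu2 driver start; on this agent the dial of
// /var/run/socket_vmnet is the step that fails.
func startHost() error {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		return fmt.Errorf("driver start: %w", err)
	}
	return conn.Close()
}

// One failure is tolerated with a five-second pause; a second failure
// becomes the fatal GUEST_PROVISION exit seen in the log above.
func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80)
		}
	}
	fmt.Println("host started")
}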
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-991000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (31.578083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-991000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-991000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-991000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.691ms)

** stderr ** 
	error: context "default-k8s-diff-port-991000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-991000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (28.3155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-991000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (27.999958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
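
The image check diffs an expected per-version image list against `minikube image list --format=json`; with the host down the command returns nothing, so the entire want-list shows up as missing. A minimal reproduction of the "(-want +got)" output shape, assuming github.com/google/go-cmp (the library that produces this diff format) rather than the test's actual helper:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images for v1.30.0, as listed in the failure above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/kube-controller-manager:v1.30.0",
		"registry.k8s.io/kube-proxy:v1.30.0",
		"registry.k8s.io/kube-scheduler:v1.30.0",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: `image list` has nothing to report on a stopped host
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.0 images missing (-want +got):\n%s", diff)
	}
}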

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-991000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-991000 --alsologtostderr -v=1: exit status 83 (40.540292ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-991000"

-- /stdout --
** stderr ** 
	I0507 11:23:01.267590   13685 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:23:01.267745   13685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:23:01.267749   13685 out.go:304] Setting ErrFile to fd 2...
	I0507 11:23:01.267751   13685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:23:01.267877   13685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:23:01.268099   13685 out.go:298] Setting JSON to false
	I0507 11:23:01.268105   13685 mustload.go:65] Loading cluster: default-k8s-diff-port-991000
	I0507 11:23:01.268286   13685 config.go:182] Loaded profile config "default-k8s-diff-port-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:23:01.273099   13685 out.go:177] * The control-plane node default-k8s-diff-port-991000 host is not running: state=Stopped
	I0507 11:23:01.277168   13685 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-991000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-991000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (27.727292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (27.819875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-478000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-478000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.180312125s)

-- stdout --
	* [newest-cni-478000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-478000" primary control-plane node in "newest-cni-478000" cluster
	* Restarting existing qemu2 VM for "newest-cni-478000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-478000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0507 11:23:11.054009   13742 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:23:11.054161   13742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:23:11.054164   13742 out.go:304] Setting ErrFile to fd 2...
	I0507 11:23:11.054166   13742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:23:11.054292   13742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:23:11.055289   13742 out.go:298] Setting JSON to false
	I0507 11:23:11.071141   13742 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6762,"bootTime":1715099429,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 11:23:11.071204   13742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 11:23:11.076478   13742 out.go:177] * [newest-cni-478000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 11:23:11.083551   13742 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 11:23:11.083626   13742 notify.go:220] Checking for updates...
	I0507 11:23:11.087537   13742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 11:23:11.090478   13742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 11:23:11.093544   13742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 11:23:11.096418   13742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 11:23:11.099518   13742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 11:23:11.102839   13742 config.go:182] Loaded profile config "newest-cni-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:23:11.103099   13742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 11:23:11.107499   13742 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 11:23:11.114461   13742 start.go:297] selected driver: qemu2
	I0507 11:23:11.114468   13742 start.go:901] validating driver "qemu2" against &{Name:newest-cni-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:23:11.114522   13742 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 11:23:11.116795   13742 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0507 11:23:11.116818   13742 cni.go:84] Creating CNI manager for ""
	I0507 11:23:11.116825   13742 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 11:23:11.116849   13742 start.go:340] cluster config:
	{Name:newest-cni-478000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-478000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 11:23:11.121021   13742 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 11:23:11.129527   13742 out.go:177] * Starting "newest-cni-478000" primary control-plane node in "newest-cni-478000" cluster
	I0507 11:23:11.133428   13742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 11:23:11.133444   13742 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 11:23:11.133456   13742 cache.go:56] Caching tarball of preloaded images
	I0507 11:23:11.133520   13742 preload.go:173] Found /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0507 11:23:11.133525   13742 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 11:23:11.133590   13742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/newest-cni-478000/config.json ...
	I0507 11:23:11.134138   13742 start.go:360] acquireMachinesLock for newest-cni-478000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:23:11.134167   13742 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "newest-cni-478000"
	I0507 11:23:11.134177   13742 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:23:11.134183   13742 fix.go:54] fixHost starting: 
	I0507 11:23:11.134298   13742 fix.go:112] recreateIfNeeded on newest-cni-478000: state=Stopped err=<nil>
	W0507 11:23:11.134307   13742 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:23:11.138510   13742 out.go:177] * Restarting existing qemu2 VM for "newest-cni-478000" ...
	I0507 11:23:11.146480   13742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:75:24:37:f6:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:23:11.148580   13742 main.go:141] libmachine: STDOUT: 
	I0507 11:23:11.148600   13742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:23:11.148632   13742 fix.go:56] duration metric: took 14.449125ms for fixHost
	I0507 11:23:11.148636   13742 start.go:83] releasing machines lock for "newest-cni-478000", held for 14.46525ms
	W0507 11:23:11.148644   13742 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:23:11.148687   13742 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:23:11.148692   13742 start.go:728] Will try again in 5 seconds ...
	I0507 11:23:16.150741   13742 start.go:360] acquireMachinesLock for newest-cni-478000: {Name:mk5872765ae14482071e454e2d8e443dc2af9c90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 11:23:16.151107   13742 start.go:364] duration metric: took 275.417µs to acquireMachinesLock for "newest-cni-478000"
	I0507 11:23:16.151243   13742 start.go:96] Skipping create...Using existing machine configuration
	I0507 11:23:16.151262   13742 fix.go:54] fixHost starting: 
	I0507 11:23:16.151968   13742 fix.go:112] recreateIfNeeded on newest-cni-478000: state=Stopped err=<nil>
	W0507 11:23:16.152012   13742 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 11:23:16.155405   13742 out.go:177] * Restarting existing qemu2 VM for "newest-cni-478000" ...
	I0507 11:23:16.162613   13742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:75:24:37:f6:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18804-8175/.minikube/machines/newest-cni-478000/disk.qcow2
	I0507 11:23:16.171433   13742 main.go:141] libmachine: STDOUT: 
	I0507 11:23:16.171515   13742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0507 11:23:16.171591   13742 fix.go:56] duration metric: took 20.329667ms for fixHost
	I0507 11:23:16.171610   13742 start.go:83] releasing machines lock for "newest-cni-478000", held for 20.479458ms
	W0507 11:23:16.171785   13742 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-478000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-478000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0507 11:23:16.179374   13742 out.go:177] 
	W0507 11:23:16.183216   13742 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0507 11:23:16.183240   13742 out.go:239] * 
	* 
	W0507 11:23:16.185909   13742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 11:23:16.193534   13742 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-478000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000: exit status 7 (66.28375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-478000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
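
Both restart attempts above die at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon, so the qemu2 VM never gets its network and provisioning aborts with GUEST_PROVISION. A minimal host-side reachability probe for that socket, assuming the default /var/run/socket_vmnet path shown in the cluster config:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket socket_vmnet_client hands to qemu; "connection
	// refused" here reproduces the driver failure: no daemon is listening,
	// so every qemu2 VM start in this run fails identically.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}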

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-478000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000: exit status 7 (29.00625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-478000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-478000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-478000 --alsologtostderr -v=1: exit status 83 (42.502333ms)

-- stdout --
	* The control-plane node newest-cni-478000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-478000"

-- /stdout --
** stderr ** 
	I0507 11:23:16.374478   13756 out.go:291] Setting OutFile to fd 1 ...
	I0507 11:23:16.374636   13756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:23:16.374640   13756 out.go:304] Setting ErrFile to fd 2...
	I0507 11:23:16.374642   13756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 11:23:16.374784   13756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 11:23:16.375015   13756 out.go:298] Setting JSON to false
	I0507 11:23:16.375022   13756 mustload.go:65] Loading cluster: newest-cni-478000
	I0507 11:23:16.375229   13756 config.go:182] Loaded profile config "newest-cni-478000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 11:23:16.379745   13756 out.go:177] * The control-plane node newest-cni-478000 host is not running: state=Stopped
	I0507 11:23:16.383806   13756 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-478000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-478000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000: exit status 7 (29.602125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-478000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000: exit status 7 (29.31025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-478000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.0/json-events 7.53
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.09
18 TestDownloadOnly/v1.30.0/DeleteAll 0.22
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.28
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.11
43 TestErrorSpam/stop 10.03
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
55 TestFunctional/serial/CacheCmd/cache/add_local 1.2
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.23
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.4
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 2.06
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.06
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.54
258 TestNoKubernetes/serial/Stop 1.89
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.66
275 TestStartStop/group/old-k8s-version/serial/Stop 3.12
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
286 TestStartStop/group/no-preload/serial/Stop 3.32
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
299 TestStartStop/group/embed-certs/serial/Stop 3.19
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.57
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.09
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-931000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-931000: exit status 85 (95.623875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:56 PDT |          |
	|         | -p download-only-931000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 10:56:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 10:56:59.853357    9424 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:56:59.853516    9424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:56:59.853520    9424 out.go:304] Setting ErrFile to fd 2...
	I0507 10:56:59.853522    9424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:56:59.853666    9424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	W0507 10:56:59.853757    9424 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18804-8175/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18804-8175/.minikube/config/config.json: no such file or directory
	I0507 10:56:59.855067    9424 out.go:298] Setting JSON to true
	I0507 10:56:59.872590    9424 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5190,"bootTime":1715099429,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:56:59.872654    9424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:56:59.875884    9424 out.go:97] [download-only-931000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:56:59.880143    9424 out.go:169] MINIKUBE_LOCATION=18804
	I0507 10:56:59.876061    9424 notify.go:220] Checking for updates...
	W0507 10:56:59.876120    9424 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball: no such file or directory
	I0507 10:56:59.886984    9424 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:56:59.890190    9424 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:56:59.893537    9424 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:56:59.895141    9424 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	W0507 10:56:59.901409    9424 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0507 10:56:59.901626    9424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:56:59.905120    9424 out.go:97] Using the qemu2 driver based on user configuration
	I0507 10:56:59.905137    9424 start.go:297] selected driver: qemu2
	I0507 10:56:59.905151    9424 start.go:901] validating driver "qemu2" against <nil>
	I0507 10:56:59.905211    9424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 10:56:59.908032    9424 out.go:169] Automatically selected the socket_vmnet network
	I0507 10:56:59.913345    9424 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0507 10:56:59.913444    9424 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 10:56:59.913472    9424 cni.go:84] Creating CNI manager for ""
	I0507 10:56:59.913492    9424 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0507 10:56:59.913551    9424 start.go:340] cluster config:
	{Name:download-only-931000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:56:59.917982    9424 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 10:56:59.921482    9424 out.go:97] Downloading VM boot image ...
	I0507 10:56:59.921507    9424 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso
	I0507 10:57:05.088110    9424 out.go:97] Starting "download-only-931000" primary control-plane node in "download-only-931000" cluster
	I0507 10:57:05.088132    9424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 10:57:05.148212    9424 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 10:57:05.148217    9424 cache.go:56] Caching tarball of preloaded images
	I0507 10:57:05.148389    9424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 10:57:05.153267    9424 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0507 10:57:05.153274    9424 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:05.237144    9424 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0507 10:57:12.518787    9424 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:12.518960    9424 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:13.217097    9424 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0507 10:57:13.217296    9424 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/download-only-931000/config.json ...
	I0507 10:57:13.217316    9424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18804-8175/.minikube/profiles/download-only-931000/config.json: {Name:mkde7b5a354249061a21034a86d309e14beb0a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 10:57:13.218786    9424 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 10:57:13.218967    9424 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0507 10:57:13.571734    9424 out.go:169] 
	W0507 10:57:13.577853    9424 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18804-8175/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00 0x107090e00] Decompressors:map[bz2:0x140006d3ae0 gz:0x140006d3ae8 tar:0x140006d3a80 tar.bz2:0x140006d3a90 tar.gz:0x140006d3ab0 tar.xz:0x140006d3ac0 tar.zst:0x140006d3ad0 tbz2:0x140006d3a90 tgz:0x140006d3ab0 txz:0x140006d3ac0 tzst:0x140006d3ad0 xz:0x140006d3d90 zip:0x140006d3dc0 zst:0x140006d3d98] Getters:map[file:0x140015c8880 http:0x140005c8230 https:0x140005c8280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0507 10:57:13.577878    9424 out_reason.go:110] 
	W0507 10:57:13.586717    9424 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0507 10:57:13.590790    9424 out.go:169] 
	
	
	* The control-plane node download-only-931000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
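
The Last Start log above also records why the v1.20.0 kubectl binary could not be cached: the getter's checksum URL for darwin/arm64 returns 404 (v1.20.0 predates published darwin/arm64 kubectl binaries, which is consistent with the error). A minimal probe that reproduces the getter's "bad response code" check, using the checksum URL verbatim from the log:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// HEAD the checksum file the getter tried to fetch; a 404 here is the
	// same "bad response code: 404" reported in the log above.
	resp, err := http.Head("https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Println("bad response code:", resp.StatusCode)
	}
}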

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-931000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.0/json-events (7.53s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-879000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-879000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 : (7.530730708s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (7.53s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-879000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-879000: exit status 85 (86.270708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:56 PDT |                     |
	|         | -p download-only-931000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| delete  | -p download-only-931000        | download-only-931000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT | 07 May 24 10:57 PDT |
	| start   | -o=json --download-only        | download-only-879000 | jenkins | v1.33.0 | 07 May 24 10:57 PDT |                     |
	|         | -p download-only-879000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 10:57:14
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 10:57:14.248795    9466 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:57:14.248931    9466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:57:14.248935    9466 out.go:304] Setting ErrFile to fd 2...
	I0507 10:57:14.248937    9466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:57:14.249062    9466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:57:14.250127    9466 out.go:298] Setting JSON to true
	I0507 10:57:14.266451    9466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5205,"bootTime":1715099429,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:57:14.266551    9466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:57:14.269929    9466 out.go:97] [download-only-879000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:57:14.273729    9466 out.go:169] MINIKUBE_LOCATION=18804
	I0507 10:57:14.270003    9466 notify.go:220] Checking for updates...
	I0507 10:57:14.280903    9466 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:57:14.282616    9466 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:57:14.285930    9466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:57:14.288929    9466 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	W0507 10:57:14.294856    9466 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0507 10:57:14.295051    9466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:57:14.297884    9466 out.go:97] Using the qemu2 driver based on user configuration
	I0507 10:57:14.297894    9466 start.go:297] selected driver: qemu2
	I0507 10:57:14.297898    9466 start.go:901] validating driver "qemu2" against <nil>
	I0507 10:57:14.297962    9466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 10:57:14.300907    9466 out.go:169] Automatically selected the socket_vmnet network
	I0507 10:57:14.306925    9466 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0507 10:57:14.307048    9466 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 10:57:14.307067    9466 cni.go:84] Creating CNI manager for ""
	I0507 10:57:14.307074    9466 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 10:57:14.307079    9466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 10:57:14.307114    9466 start.go:340] cluster config:
	{Name:download-only-879000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:57:14.311563    9466 iso.go:125] acquiring lock: {Name:mk59707429daf439ac3f0a5a567aed81a07daf90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 10:57:14.315967    9466 out.go:97] Starting "download-only-879000" primary control-plane node in "download-only-879000" cluster
	I0507 10:57:14.315976    9466 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 10:57:14.369833    9466 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0507 10:57:14.369863    9466 cache.go:56] Caching tarball of preloaded images
	I0507 10:57:14.370212    9466 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 10:57:14.373796    9466 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0507 10:57:14.373803    9466 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0507 10:57:14.849580    9466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/jenkins/minikube-integration/18804-8175/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-879000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-879000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-879000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-067000 --alsologtostderr --binary-mirror http://127.0.0.1:51025 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-067000
--- PASS: TestBinaryMirror (0.33s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-189000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-189000: exit status 85 (56.034208ms)

-- stdout --
	* Profile "addons-189000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-189000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-189000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-189000: exit status 85 (58.672625ms)

-- stdout --
	* Profile "addons-189000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-189000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.28s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status: exit status 7 (31.241583ms)

-- stdout --
	nospam-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status: exit status 7 (28.951875ms)

-- stdout --
	nospam-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status: exit status 7 (28.795584ms)

-- stdout --
	nospam-636000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause: exit status 83 (47.028125ms)

-- stdout --
	* The control-plane node nospam-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-636000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause: exit status 83 (39.987542ms)

-- stdout --
	* The control-plane node nospam-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-636000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause: exit status 83 (41.628333ms)

-- stdout --
	* The control-plane node nospam-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-636000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause: exit status 83 (36.804667ms)

-- stdout --
	* The control-plane node nospam-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-636000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause: exit status 83 (37.904583ms)

-- stdout --
	* The control-plane node nospam-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-636000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause: exit status 83 (38.789417ms)

-- stdout --
	* The control-plane node nospam-636000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-636000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 stop: (3.568057292s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 stop: (3.096524042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-636000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-636000 stop: (3.364837125s)
--- PASS: TestErrorSpam/stop (10.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18804-8175/.minikube/files/etc/test/nested/copy/9422/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-642000 cache add registry.k8s.io/pause:3.1: (1.191513125s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-642000 cache add registry.k8s.io/pause:3.3: (1.198039125s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local279804550/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cache add minikube-local-cache-test:functional-642000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 cache delete minikube-local-cache-test:functional-642000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-642000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 config get cpus: exit status 14 (28.99825ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 config get cpus: exit status 14 (35.96325ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-642000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-642000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (161.881375ms)

-- stdout --
	* [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0507 10:59:08.022394   10183 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:59:08.022565   10183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.022569   10183 out.go:304] Setting ErrFile to fd 2...
	I0507 10:59:08.022572   10183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.022728   10183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:59:08.024043   10183 out.go:298] Setting JSON to false
	I0507 10:59:08.044098   10183 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5319,"bootTime":1715099429,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:59:08.044172   10183 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:59:08.050031   10183 out.go:177] * [functional-642000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0507 10:59:08.057996   10183 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 10:59:08.058036   10183 notify.go:220] Checking for updates...
	I0507 10:59:08.064878   10183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:59:08.067954   10183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:59:08.070918   10183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:59:08.073941   10183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 10:59:08.076950   10183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 10:59:08.078392   10183 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:59:08.078690   10183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:59:08.082952   10183 out.go:177] * Using the qemu2 driver based on existing profile
	I0507 10:59:08.089791   10183 start.go:297] selected driver: qemu2
	I0507 10:59:08.089796   10183 start.go:901] validating driver "qemu2" against &{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:59:08.089840   10183 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 10:59:08.096853   10183 out.go:177] 
	W0507 10:59:08.100988   10183 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0507 10:59:08.104889   10183 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-642000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-642000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-642000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.971125ms)

-- stdout --
	* [functional-642000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0507 10:59:08.251605   10194 out.go:291] Setting OutFile to fd 1 ...
	I0507 10:59:08.251765   10194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.251769   10194 out.go:304] Setting ErrFile to fd 2...
	I0507 10:59:08.251771   10194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 10:59:08.251901   10194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18804-8175/.minikube/bin
	I0507 10:59:08.253270   10194 out.go:298] Setting JSON to false
	I0507 10:59:08.269902   10194 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5319,"bootTime":1715099429,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0507 10:59:08.269976   10194 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 10:59:08.273914   10194 out.go:177] * [functional-642000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	I0507 10:59:08.280919   10194 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 10:59:08.284951   10194 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	I0507 10:59:08.280960   10194 notify.go:220] Checking for updates...
	I0507 10:59:08.290355   10194 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0507 10:59:08.292939   10194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 10:59:08.295980   10194 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	I0507 10:59:08.298994   10194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 10:59:08.302268   10194 config.go:182] Loaded profile config "functional-642000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 10:59:08.302524   10194 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 10:59:08.306914   10194 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0507 10:59:08.313905   10194 start.go:297] selected driver: qemu2
	I0507 10:59:08.313910   10194 start.go:901] validating driver "qemu2" against &{Name:functional-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:functional-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 10:59:08.313984   10194 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 10:59:08.320901   10194 out.go:177] 
	W0507 10:59:08.324942   10194 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0507 10:59:08.328905   10194 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.40s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.024901958s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-642000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image rm gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-642000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 image save --daemon gcr.io/google-containers/addon-resizer:functional-642000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-642000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "68.3355ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.908041ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "67.8535ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.698542ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012780875s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-642000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-642000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-642000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-642000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-845000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-845000 --output=json --user=testUser: (2.998958833s)
--- PASS: TestJSONOutput/stop/Command (3.00s)
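The stop/Command entry above drives `minikube stop` with `--output=json`, which prints one CloudEvents-style JSON object per line (the full schema is visible in the TestErrorJSONOutput stdout further down). A minimal Go sketch of consuming that stream, reusing the binary path and profile name from the log; the `event` struct is our own illustration and declares only the fields the sketch reads:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// event models just the "type" and "data" fields of the JSON lines
// minikube emits with --output=json.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"stop", "-p", "json-output-845000", "--output=json", "--user=testUser")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate non-JSON lines
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("stop failed: %v", err)
	}
}
```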

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
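Judging by their names, the DistinctCurrentSteps and IncreasingCurrentSteps subtests check that the `currentstep` values carried by the step events are unique and monotonically increasing. A hedged sketch of such a check; the helper name and the sample values are our own, not taken from the test source:

```go
package main

import (
	"fmt"
	"strconv"
)

// checkCurrentSteps reports whether a sequence of currentstep values,
// taken from io.k8s.sigs.minikube.step events, is distinct and strictly
// increasing, which is what the subtest names imply.
func checkCurrentSteps(steps []string) error {
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return fmt.Errorf("non-numeric currentstep %q: %v", s, err)
		}
		if n <= prev {
			return fmt.Errorf("currentstep %d not greater than previous %d", n, prev)
		}
		prev = n
	}
	return nil
}

func main() {
	// Illustrative values; in the real test they come from parsed events.
	fmt.Println(checkCurrentSteps([]string{"0", "1", "3", "19"}))
}
```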

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-231000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-231000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.94675ms)
-- stdout --
	{"specversion":"1.0","id":"5b7b2f88-8db1-4384-a52c-7a22501d2115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-231000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be47f7c0-bedb-4cf7-befa-4e506ff1ebb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18804"}}
	{"specversion":"1.0","id":"c5f45a9d-b0d9-4bda-a642-708d105893c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig"}}
	{"specversion":"1.0","id":"0b279150-0450-4a85-94f7-84c6c75cc7b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"57c97f9c-2b9c-40a7-ad41-daf024ac2cdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4488ad12-9a10-4e70-8a8d-ef7b279cc0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube"}}
	{"specversion":"1.0","id":"4d8f4ac7-0a70-4509-96fb-6ab963327b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f690cb7-5059-4ab8-a14b-1ff27b3ac3d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-231000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-231000
--- PASS: TestErrorJSONOutput (0.32s)
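A failed start under `--output=json` surfaces as a final event of type `io.k8s.sigs.minikube.error` carrying a symbolic `name` (here DRV_UNSUPPORTED_OS), an `exitcode`, and a `message`, as the stdout above shows. A small sketch that picks that event out of captured output; the field names are taken from the JSON above, while the function name is ours:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// lastError returns the name/message/exitcode of the last error event in
// the captured stdout, matching the schema shown above.
func lastError(stdout string) (name, msg, code string, ok bool) {
	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev) != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			name, msg, code, ok = ev.Data["name"], ev.Data["message"], ev.Data["exitcode"], true
		}
	}
	return
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	name, msg, code, _ := lastError(line)
	fmt.Println(name, code, msg) // DRV_UNSUPPORTED_OS 56 The driver 'fail' is not supported on darwin/arm64
}
```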

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.06s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-274000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (104.743833ms)
-- stdout --
	* [NoKubernetes-274000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18804
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18804-8175/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18804-8175/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
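The test asserts that `--no-kubernetes` combined with `--kubernetes-version` is rejected up front with exit status 14 (MK_USAGE), as the stderr above shows. A sketch of reproducing the check from Go, using the same flags and binary path as the log; the error-handling pattern is plain os/exec, not minikube test code:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Flags mirror the test invocation shown in the log above.
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "NoKubernetes-274000", "--no-kubernetes",
		"--kubernetes-version=1.20", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The report shows exit status 14 (MK_USAGE) for this flag conflict.
		fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("unexpected result: %v\n%s", err, out)
}
```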

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-274000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-274000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.50425ms)
-- stdout --
	* The control-plane node NoKubernetes-274000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-274000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
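The probe relies on `systemctl is-active --quiet`, which exits 0 only when the unit is active; here even the ssh wrapper fails (exit 83, host not running), which the test equally counts as "kubelet not running". A sketch of the same check; the helper name is ours:

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning shells into the profile and asks systemd whether kubelet
// is active. Exit 0 means active; any non-zero exit (including minikube's
// own exit 83 when the host is stopped) is treated as not running.
func kubeletRunning(profile string) bool {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println(kubeletRunning("NoKubernetes-274000"))
}
```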

TestNoKubernetes/serial/ProfileList (31.54s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.735078458s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.805806583s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.54s)
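ProfileList exercises both the human-readable and the JSON forms of `minikube profile list`. The JSON schema is not shown anywhere in this report, so the sketch below deliberately decodes into a generic map instead of asserting field names:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode generically; the exact schema is not documented in this report.
	var profiles map[string]json.RawMessage
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	for key := range profiles {
		fmt.Println("top-level key:", key)
	}
}
```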

TestNoKubernetes/serial/Stop (1.89s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-274000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-274000: (1.893155667s)
--- PASS: TestNoKubernetes/serial/Stop (1.89s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-274000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-274000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.5235ms)
-- stdout --
	* The control-plane node NoKubernetes-274000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-274000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-069000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.66s)

TestStartStop/group/old-k8s-version/serial/Stop (3.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-301000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-301000 --alsologtostderr -v=3: (3.115915s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-301000 -n old-k8s-version-301000: exit status 7 (32.500875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-301000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
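EnableAddonAfterStop first reads `minikube status --format={{.Host}}`, where exit status 7 together with the output "Stopped" is expected (the log marks it "may be ok"), and then enables the dashboard addon against the stopped profile. A sketch of tolerating that exit code; the helper name is ours:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the {{.Host}} field of `minikube status`. Exit status 7
// is tolerated because, as the log notes, it simply signals a stopped host.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if err != nil && !(errors.As(err, &ee) && ee.ExitCode() == 7) {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := hostState("old-k8s-version-301000")
	fmt.Println(state, err) // expected here: Stopped <nil>
}
```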

TestStartStop/group/no-preload/serial/Stop (3.32s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-504000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-504000 --alsologtostderr -v=3: (3.324036209s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-504000 -n no-preload-504000: exit status 7 (50.192667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-504000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.19s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-163000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-163000 --alsologtostderr -v=3: (3.185064s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-163000 -n embed-certs-163000: exit status 7 (55.812292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-163000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.57s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-991000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-991000 --alsologtostderr -v=3: (3.572874083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-991000 -n default-k8s-diff-port-991000: exit status 7 (52.695834ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-991000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-478000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.09s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-478000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-478000 --alsologtostderr -v=3: (3.0939515s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-478000 -n newest-cni-478000: exit status 7 (53.749541ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-478000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (9.59s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3422849052/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1715104711214160000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3422849052/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1715104711214160000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3422849052/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1715104711214160000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3422849052/001/test-1715104711214160000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (53.398458ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.80675ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.621625ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.917708ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.995583ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.844083ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.670792ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo umount -f /mount-9p": exit status 83 (47.648791ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3422849052/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (9.59s)
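The mount tests poll `findmnt -T /mount-9p` through `minikube ssh` and skip once the 9p mount fails to appear; as the skip message explains, macOS will not let a non-code-signed binary listen on a non-localhost port without an interactive prompt. A sketch of the same polling loop, with the profile name from the log and the timeout our own choice:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount retries the same probe the test uses until the 9p mount
// shows up or the deadline passes.
func waitForMount(profile string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if cmd.Run() == nil {
			return true
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	fmt.Println(waitForMount("functional-642000", 10*time.Second))
}
```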

TestFunctional/parallel/MountCmd/specific-port (12.37s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2821273918/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.1675ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.686625ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.614375ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.411ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.007708ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.274292ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.729292ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "sudo umount -f /mount-9p": exit status 83 (46.764791ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-642000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2821273918/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.37s)

TestFunctional/parallel/MountCmd/VerifyCleanup (14.78s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2893344076/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2893344076/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2893344076/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (84.888125ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (85.783167ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (87.91ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (84.546917ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (85.171875ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (84.390917ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-642000 ssh "findmnt -T" /mount1: exit status 83 (79.431791ms)
-- stdout --
	* The control-plane node functional-642000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-642000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2893344076/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2893344076/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-642000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2893344076/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.78s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.38s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-359000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-359000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-359000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: /etc/hosts:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: /etc/resolv.conf:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-359000

>>> host: crictl pods:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: crictl containers:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> k8s: describe netcat deployment:
error: context "cilium-359000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-359000" does not exist

>>> k8s: netcat logs:
error: context "cilium-359000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-359000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-359000" does not exist

>>> k8s: coredns logs:
error: context "cilium-359000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-359000" does not exist

>>> k8s: api server logs:
error: context "cilium-359000" does not exist

>>> host: /etc/cni:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: ip a s:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: ip r s:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: iptables-save:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> host: iptables table nat:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-359000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-359000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-359000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-359000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-359000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-359000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-359000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-359000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-359000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-359000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-359000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-359000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-359000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359000"

                                                
                                                
----------------------- debugLogs end: cilium-359000 [took: 2.155938084s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-359000
--- SKIP: TestNetworkPlugins/group/cilium (2.38s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-914000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-914000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)