Test Report: QEMU_macOS 18872

e5a45a5ea9a7bb508c00b9c70a33890e15fde7d2:2024-05-13:34460

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.65
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.07
27 TestAddons/Setup 10.46
28 TestCertOptions 10.09
29 TestCertExpiration 195.46
30 TestDockerFlags 10.19
31 TestForceSystemdFlag 10.03
32 TestForceSystemdEnv 12
38 TestErrorSpam/setup 10.02
47 TestFunctional/serial/StartWithProxy 9.95
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.63
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.95
63 TestFunctional/serial/ExtraConfig 5.24
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.07
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.19
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.26
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.27
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 86.9
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.85
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 34.67
141 TestMultiControlPlane/serial/StartCluster 10.13
142 TestMultiControlPlane/serial/DeployApp 112.46
143 TestMultiControlPlane/serial/PingHostFromPods 0.08
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 43.38
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.28
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.62
156 TestMultiControlPlane/serial/RestartCluster 5.24
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 10.13
165 TestJSONOutput/start/Command 9.75
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.36
197 TestMountStart/serial/StartWithMountFirst 10.09
200 TestMultiNode/serial/FreshStart2Nodes 9.88
201 TestMultiNode/serial/DeployApp2Nodes 101.24
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 44.79
209 TestMultiNode/serial/RestartKeepsNodes 8.64
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.49
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 21.4
217 TestPreload 10.19
219 TestScheduledStopUnix 10.18
220 TestSkaffold 12.27
223 TestRunningBinaryUpgrade 615.68
225 TestKubernetesUpgrade 18.74
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.38
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.96
241 TestStoppedBinaryUpgrade/Upgrade 564.96
243 TestPause/serial/Start 9.78
253 TestNoKubernetes/serial/StartWithK8s 9.88
254 TestNoKubernetes/serial/StartWithStopK8s 5.3
255 TestNoKubernetes/serial/Start 5.29
259 TestNoKubernetes/serial/StartNoArgs 5.31
261 TestNetworkPlugins/group/auto/Start 9.96
262 TestNetworkPlugins/group/kindnet/Start 10.03
263 TestNetworkPlugins/group/calico/Start 9.85
264 TestNetworkPlugins/group/custom-flannel/Start 9.73
265 TestNetworkPlugins/group/false/Start 9.84
266 TestNetworkPlugins/group/enable-default-cni/Start 9.7
267 TestNetworkPlugins/group/flannel/Start 9.73
269 TestNetworkPlugins/group/bridge/Start 9.8
270 TestNetworkPlugins/group/kubenet/Start 10
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.96
274 TestStartStop/group/embed-certs/serial/FirstStart 9.84
275 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
279 TestStartStop/group/embed-certs/serial/DeployApp 0.09
280 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
281 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
284 TestStartStop/group/embed-certs/serial/SecondStart 5.31
285 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
286 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
287 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
288 TestStartStop/group/old-k8s-version/serial/Pause 0.1
290 TestStartStop/group/no-preload/serial/FirstStart 10.03
291 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/embed-certs/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
297 TestStartStop/group/no-preload/serial/DeployApp 0.09
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/no-preload/serial/SecondStart 5.25
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/no-preload/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.84
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (10.65s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-547000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-547000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.645290666s)

-- stdout --
	{"specversion":"1.0","id":"e548f08c-4def-44e8-820b-81be1737a364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-547000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"95032d1c-4003-426c-a389-cd52b396cac8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18872"}}
	{"specversion":"1.0","id":"c6aaf95e-2ac3-4d49-a7ee-afe82f5d087c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig"}}
	{"specversion":"1.0","id":"a485ee6b-354e-4691-a979-3ef157030d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4cda5926-078c-4f2e-a8c6-e0b4750bf51b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"854bedc2-30eb-4e5d-a1ad-43a03362dc1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube"}}
	{"specversion":"1.0","id":"79b7ce84-9f83-4525-805d-c3e9dfbe1659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"864ccb26-f595-4619-9a38-78dc20c1b514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1eedce6c-ed7f-485a-89bd-1b9484671715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"eef043a4-f2f1-4167-922e-c8e901eadd96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c817fc10-ce6d-43b2-84f8-9a591cf9b788","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-547000\" primary control-plane node in \"download-only-547000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec344370-274f-4085-b434-1a60ceee3cc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cf6cbc7-0d3f-4d2e-902b-97de1df80001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320] Decompressors:map[bz2:0x1400000f170 gz:0x1400000f178 tar:0x1400000f110 tar.bz2:0x1400000f120 tar.gz:0x1400000f140 tar.xz:0x1400000f150 tar.zst:0x1400000f160 tbz2:0x1400000f120 tgz:0x1400000f140 txz:0x1400000f150 tzst:0x1400000f160 xz:0x1400000f180 zip:0x1400000f190 zst:0x1400000f188] Getters:map[file:0x140015aa6f0 http:0x14000654280 https:0x140006542d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b0b9a38a-a9f7-4c6e-8960-60a3e91f8fdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0513 17:18:14.164304   35058 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:18:14.164435   35058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:14.164445   35058 out.go:304] Setting ErrFile to fd 2...
	I0513 17:18:14.164448   35058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:14.164555   35058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	W0513 17:18:14.164639   35058 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18872-34554/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18872-34554/.minikube/config/config.json: no such file or directory
	I0513 17:18:14.165995   35058 out.go:298] Setting JSON to true
	I0513 17:18:14.182739   35058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26264,"bootTime":1715619630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:18:14.182818   35058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:18:14.189915   35058 out.go:97] [download-only-547000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:18:14.192977   35058 out.go:169] MINIKUBE_LOCATION=18872
	I0513 17:18:14.190037   35058 notify.go:220] Checking for updates...
	W0513 17:18:14.190054   35058 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball: no such file or directory
	I0513 17:18:14.199863   35058 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:18:14.202901   35058 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:18:14.205822   35058 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:18:14.208906   35058 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	W0513 17:18:14.213327   35058 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 17:18:14.213517   35058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:18:14.216896   35058 out.go:97] Using the qemu2 driver based on user configuration
	I0513 17:18:14.216914   35058 start.go:297] selected driver: qemu2
	I0513 17:18:14.216937   35058 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:18:14.217005   35058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:18:14.219883   35058 out.go:169] Automatically selected the socket_vmnet network
	I0513 17:18:14.225913   35058 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0513 17:18:14.226016   35058 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 17:18:14.226095   35058 cni.go:84] Creating CNI manager for ""
	I0513 17:18:14.226113   35058 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 17:18:14.226174   35058 start.go:340] cluster config:
	{Name:download-only-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:18:14.231095   35058 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:18:14.235884   35058 out.go:97] Downloading VM boot image ...
	I0513 17:18:14.235909   35058 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso
	I0513 17:18:18.360722   35058 out.go:97] Starting "download-only-547000" primary control-plane node in "download-only-547000" cluster
	I0513 17:18:18.360769   35058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:18:18.417474   35058 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:18.417500   35058 cache.go:56] Caching tarball of preloaded images
	I0513 17:18:18.417661   35058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:18:18.422694   35058 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0513 17:18:18.422701   35058 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:18.494447   35058 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:23.694307   35058 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:23.694463   35058 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:24.390051   35058 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 17:18:24.390264   35058 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/download-only-547000/config.json ...
	I0513 17:18:24.390282   35058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/download-only-547000/config.json: {Name:mk00910a7732fd1fca67979e6d1118b3602b6c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:18:24.390529   35058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:18:24.391402   35058 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0513 17:18:24.732496   35058 out.go:169] 
	W0513 17:18:24.738665   35058 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320] Decompressors:map[bz2:0x1400000f170 gz:0x1400000f178 tar:0x1400000f110 tar.bz2:0x1400000f120 tar.gz:0x1400000f140 tar.xz:0x1400000f150 tar.zst:0x1400000f160 tbz2:0x1400000f120 tgz:0x1400000f140 txz:0x1400000f150 tzst:0x1400000f160 xz:0x1400000f180 zip:0x1400000f190 zst:0x1400000f188] Getters:map[file:0x140015aa6f0 http:0x14000654280 https:0x140006542d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0513 17:18:24.738686   35058 out_reason.go:110] 
	W0513 17:18:24.745502   35058 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:18:24.749489   35058 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-547000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.65s)
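
The failure is upstream of the harness: go-getter first fetches the .sha256 checksum file for kubectl, and dl.k8s.io answers 404 for v1.20.0 on darwin/arm64 (that release appears to predate published darwin/arm64 kubectl binaries), so minikube aborts with exit status 40 before the binary itself is downloaded. Below is a minimal standalone Go sketch, not part of the test suite, that reproduces the probe; the URL is copied verbatim from the log.

// probe_kubectl.go — hypothetical standalone check, assuming only the
// standard library. It issues HEAD requests for the kubectl binary and its
// .sha256 checksum; a 404 on either matches the "bad response code: 404"
// in the log above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
	for _, url := range []string{base, base + ".sha256"} {
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println(url, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}
}

Because the checksum request fails first ("Error downloading checksum file"), nothing is written to the kubectl cache path, which is exactly what the /kubectl subtest below then trips over.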

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
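
This subtest only asserts that the previous step left a kubectl binary in the cache. A minimal sketch of that existence check (hypothetical standalone program, not the actual test code; the path is copied from the log):

// stat_kubectl.go — stat the cached kubectl path and report the error,
// mirroring the "no such file or directory" failure above.
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		fmt.Println("missing cached kubectl:", err)
		return
	}
	fmt.Println("cached kubectl present")
}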

TestOffline (10.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-281000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-281000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.904218833s)

-- stdout --
	* [offline-docker-281000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-281000" primary control-plane node in "offline-docker-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:29:36.338304   36580 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:29:36.338461   36580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:36.338465   36580 out.go:304] Setting ErrFile to fd 2...
	I0513 17:29:36.338474   36580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:36.338603   36580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:29:36.339894   36580 out.go:298] Setting JSON to false
	I0513 17:29:36.357501   36580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26946,"bootTime":1715619630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:29:36.357595   36580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:29:36.366101   36580 out.go:177] * [offline-docker-281000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:29:36.369174   36580 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:29:36.369193   36580 notify.go:220] Checking for updates...
	I0513 17:29:36.373151   36580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:29:36.376124   36580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:29:36.379119   36580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:29:36.382159   36580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:29:36.385158   36580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:29:36.388409   36580 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:36.388469   36580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:29:36.392114   36580 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:29:36.399055   36580 start.go:297] selected driver: qemu2
	I0513 17:29:36.399067   36580 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:29:36.399074   36580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:29:36.401042   36580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:29:36.404047   36580 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:29:36.407191   36580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:29:36.407208   36580 cni.go:84] Creating CNI manager for ""
	I0513 17:29:36.407220   36580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:29:36.407223   36580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:29:36.407259   36580 start.go:340] cluster config:
	{Name:offline-docker-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:29:36.411662   36580 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:36.419097   36580 out.go:177] * Starting "offline-docker-281000" primary control-plane node in "offline-docker-281000" cluster
	I0513 17:29:36.422879   36580 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:29:36.422908   36580 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:29:36.422914   36580 cache.go:56] Caching tarball of preloaded images
	I0513 17:29:36.422995   36580 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:29:36.423008   36580 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:29:36.423076   36580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/offline-docker-281000/config.json ...
	I0513 17:29:36.423087   36580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/offline-docker-281000/config.json: {Name:mkd186d57cf8ea616c7f9572acd2b977aa34afe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:29:36.423383   36580 start.go:360] acquireMachinesLock for offline-docker-281000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:36.423424   36580 start.go:364] duration metric: took 32.209µs to acquireMachinesLock for "offline-docker-281000"
	I0513 17:29:36.423437   36580 start.go:93] Provisioning new machine with config: &{Name:offline-docker-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:36.423466   36580 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:36.428119   36580 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:29:36.443849   36580 start.go:159] libmachine.API.Create for "offline-docker-281000" (driver="qemu2")
	I0513 17:29:36.443877   36580 client.go:168] LocalClient.Create starting
	I0513 17:29:36.443945   36580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:36.443976   36580 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:36.443986   36580 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:36.444037   36580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:36.444059   36580 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:36.444067   36580 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:36.444454   36580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:36.585152   36580 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:36.747033   36580 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:36.747043   36580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:36.750708   36580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2
	I0513 17:29:36.769955   36580 main.go:141] libmachine: STDOUT: 
	I0513 17:29:36.769987   36580 main.go:141] libmachine: STDERR: 
	I0513 17:29:36.770070   36580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2 +20000M
	I0513 17:29:36.783925   36580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:36.783948   36580 main.go:141] libmachine: STDERR: 
	I0513 17:29:36.783979   36580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2
	I0513 17:29:36.783983   36580 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:36.784023   36580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:12:6f:e6:1a:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2
	I0513 17:29:36.785988   36580 main.go:141] libmachine: STDOUT: 
	I0513 17:29:36.786006   36580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:36.786023   36580 client.go:171] duration metric: took 342.147625ms to LocalClient.Create
	I0513 17:29:38.788071   36580 start.go:128] duration metric: took 2.364643459s to createHost
	I0513 17:29:38.788097   36580 start.go:83] releasing machines lock for "offline-docker-281000", held for 2.364715291s
	W0513 17:29:38.788116   36580 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:38.809253   36580 out.go:177] * Deleting "offline-docker-281000" in qemu2 ...
	W0513 17:29:38.822257   36580 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:38.822268   36580 start.go:728] Will try again in 5 seconds ...
	I0513 17:29:43.824255   36580 start.go:360] acquireMachinesLock for offline-docker-281000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:43.824367   36580 start.go:364] duration metric: took 87.708µs to acquireMachinesLock for "offline-docker-281000"
	I0513 17:29:43.824396   36580 start.go:93] Provisioning new machine with config: &{Name:offline-docker-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:43.824475   36580 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:43.828755   36580 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:29:43.844201   36580 start.go:159] libmachine.API.Create for "offline-docker-281000" (driver="qemu2")
	I0513 17:29:43.844229   36580 client.go:168] LocalClient.Create starting
	I0513 17:29:43.844297   36580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:43.844328   36580 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:43.844336   36580 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:43.844373   36580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:43.844396   36580 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:43.844404   36580 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:43.844685   36580 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:43.981972   36580 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:44.147581   36580 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:44.147592   36580 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:44.147810   36580 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2
	I0513 17:29:44.160741   36580 main.go:141] libmachine: STDOUT: 
	I0513 17:29:44.160767   36580 main.go:141] libmachine: STDERR: 
	I0513 17:29:44.160824   36580 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2 +20000M
	I0513 17:29:44.172137   36580 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:44.172168   36580 main.go:141] libmachine: STDERR: 
	I0513 17:29:44.172182   36580 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2
	I0513 17:29:44.172187   36580 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:44.172217   36580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4a:73:77:9c:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/offline-docker-281000/disk.qcow2
	I0513 17:29:44.173962   36580 main.go:141] libmachine: STDOUT: 
	I0513 17:29:44.173980   36580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:44.173991   36580 client.go:171] duration metric: took 329.7655ms to LocalClient.Create
	I0513 17:29:46.176169   36580 start.go:128] duration metric: took 2.35170825s to createHost
	I0513 17:29:46.176291   36580 start.go:83] releasing machines lock for "offline-docker-281000", held for 2.351959s
	W0513 17:29:46.176644   36580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:46.183340   36580 out.go:177] 
	W0513 17:29:46.187319   36580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:29:46.187357   36580 out.go:239] * 
	* 
	W0513 17:29:46.189757   36580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:29:46.199265   36580 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-281000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-13 17:29:46.2155 -0700 PDT m=+692.185156001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-281000 -n offline-docker-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-281000 -n offline-docker-281000: exit status 7 (66.670583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-281000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-281000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-281000
--- FAIL: TestOffline (10.07s)
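
Unlike the download failures above, this test (and, judging by the identical ~10 s two-attempt pattern, likely most of the other Start failures in the table) dies because libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet. A minimal Go sketch, independent of minikube, that checks whether the socket_vmnet daemon is actually accepting connections on that socket:

// check_vmnet.go — hypothetical standalone probe, assuming only the
// standard library. Dials the unix socket used by socket_vmnet_client;
// "connection refused" here reproduces the failure mode in the logs above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A socket file with no daemon listening behind it yields
		// "connection refused", which every "Creating qemu2 VM" step hit.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this reports "connection refused", the daemon is down or the socket file is stale; restarting the socket_vmnet service on the CI host would be the first thing to try.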

TestAddons/Setup (10.46s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-521000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-521000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.457457709s)

-- stdout --
	* [addons-521000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-521000" primary control-plane node in "addons-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:18:36.185737   35170 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:18:36.185864   35170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:36.185867   35170 out.go:304] Setting ErrFile to fd 2...
	I0513 17:18:36.185870   35170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:36.186000   35170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:18:36.187034   35170 out.go:298] Setting JSON to false
	I0513 17:18:36.203016   35170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26286,"bootTime":1715619630,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:18:36.203086   35170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:18:36.208308   35170 out.go:177] * [addons-521000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:18:36.214228   35170 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:18:36.218301   35170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:18:36.214255   35170 notify.go:220] Checking for updates...
	I0513 17:18:36.223257   35170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:18:36.226329   35170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:18:36.229283   35170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:18:36.232220   35170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:18:36.235397   35170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:18:36.239301   35170 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:18:36.246216   35170 start.go:297] selected driver: qemu2
	I0513 17:18:36.246224   35170 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:18:36.246230   35170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:18:36.248491   35170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:18:36.251323   35170 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:18:36.254283   35170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:18:36.254299   35170 cni.go:84] Creating CNI manager for ""
	I0513 17:18:36.254305   35170 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:18:36.254313   35170 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:18:36.254337   35170 start.go:340] cluster config:
	{Name:addons-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:18:36.258695   35170 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:18:36.267088   35170 out.go:177] * Starting "addons-521000" primary control-plane node in "addons-521000" cluster
	I0513 17:18:36.271290   35170 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:18:36.271307   35170 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:36.271317   35170 cache.go:56] Caching tarball of preloaded images
	I0513 17:18:36.271386   35170 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:18:36.271392   35170 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:18:36.271607   35170 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/addons-521000/config.json ...
	I0513 17:18:36.271618   35170 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/addons-521000/config.json: {Name:mked6c3ef3c36345b2498c39d9731ea5c95b5e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:18:36.271848   35170 start.go:360] acquireMachinesLock for addons-521000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:18:36.272042   35170 start.go:364] duration metric: took 187.916µs to acquireMachinesLock for "addons-521000"
	I0513 17:18:36.272054   35170 start.go:93] Provisioning new machine with config: &{Name:addons-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:18:36.272079   35170 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:18:36.281204   35170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0513 17:18:36.298221   35170 start.go:159] libmachine.API.Create for "addons-521000" (driver="qemu2")
	I0513 17:18:36.298247   35170 client.go:168] LocalClient.Create starting
	I0513 17:18:36.298360   35170 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:18:36.357205   35170 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:18:36.450367   35170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:18:36.924991   35170 main.go:141] libmachine: Creating SSH key...
	I0513 17:18:37.216674   35170 main.go:141] libmachine: Creating Disk image...
	I0513 17:18:37.216685   35170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:18:37.216912   35170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2
	I0513 17:18:37.230172   35170 main.go:141] libmachine: STDOUT: 
	I0513 17:18:37.230205   35170 main.go:141] libmachine: STDERR: 
	I0513 17:18:37.230266   35170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2 +20000M
	I0513 17:18:37.241279   35170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:18:37.241294   35170 main.go:141] libmachine: STDERR: 
	I0513 17:18:37.241320   35170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2
	I0513 17:18:37.241326   35170 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:18:37.241369   35170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1c:ee:51:7a:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2
	I0513 17:18:37.243037   35170 main.go:141] libmachine: STDOUT: 
	I0513 17:18:37.243055   35170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:18:37.243085   35170 client.go:171] duration metric: took 944.831416ms to LocalClient.Create
	I0513 17:18:39.245249   35170 start.go:128] duration metric: took 2.973169291s to createHost
	I0513 17:18:39.245303   35170 start.go:83] releasing machines lock for "addons-521000", held for 2.973274584s
	W0513 17:18:39.245372   35170 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:18:39.259697   35170 out.go:177] * Deleting "addons-521000" in qemu2 ...
	W0513 17:18:39.285196   35170 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:18:39.285223   35170 start.go:728] Will try again in 5 seconds ...
	I0513 17:18:44.287371   35170 start.go:360] acquireMachinesLock for addons-521000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:18:44.287799   35170 start.go:364] duration metric: took 345.917µs to acquireMachinesLock for "addons-521000"
	I0513 17:18:44.287954   35170 start.go:93] Provisioning new machine with config: &{Name:addons-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:18:44.288210   35170 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:18:44.295872   35170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0513 17:18:44.342221   35170 start.go:159] libmachine.API.Create for "addons-521000" (driver="qemu2")
	I0513 17:18:44.342261   35170 client.go:168] LocalClient.Create starting
	I0513 17:18:44.342369   35170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:18:44.342438   35170 main.go:141] libmachine: Decoding PEM data...
	I0513 17:18:44.342453   35170 main.go:141] libmachine: Parsing certificate...
	I0513 17:18:44.342546   35170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:18:44.342606   35170 main.go:141] libmachine: Decoding PEM data...
	I0513 17:18:44.342626   35170 main.go:141] libmachine: Parsing certificate...
	I0513 17:18:44.343120   35170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:18:44.506939   35170 main.go:141] libmachine: Creating SSH key...
	I0513 17:18:44.548478   35170 main.go:141] libmachine: Creating Disk image...
	I0513 17:18:44.548487   35170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:18:44.548676   35170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2
	I0513 17:18:44.561293   35170 main.go:141] libmachine: STDOUT: 
	I0513 17:18:44.561318   35170 main.go:141] libmachine: STDERR: 
	I0513 17:18:44.561384   35170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2 +20000M
	I0513 17:18:44.572385   35170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:18:44.572406   35170 main.go:141] libmachine: STDERR: 
	I0513 17:18:44.572422   35170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2
	I0513 17:18:44.572430   35170 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:18:44.572464   35170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:3d:cf:9e:d0:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/addons-521000/disk.qcow2
	I0513 17:18:44.574227   35170 main.go:141] libmachine: STDOUT: 
	I0513 17:18:44.574242   35170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:18:44.574260   35170 client.go:171] duration metric: took 231.996208ms to LocalClient.Create
	I0513 17:18:46.576501   35170 start.go:128] duration metric: took 2.288263291s to createHost
	I0513 17:18:46.576595   35170 start.go:83] releasing machines lock for "addons-521000", held for 2.288762s
	W0513 17:18:46.576931   35170 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:18:46.585833   35170 out.go:177] 
	W0513 17:18:46.590446   35170 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:18:46.590483   35170 out.go:239] * 
	* 
	W0513 17:18:46.592799   35170 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:18:46.605365   35170 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-521000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.46s)
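
The qemu-system-aarch64 invocation logged above shows the wiring that fails: libmachine does not have QEMU open the socket itself; it launches /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as file descriptor 3 (-netdev socket,id=net0,fd=3). That makes the refused connection easy to probe without booting a VM; a sketch using the client and socket paths from the failing command line (substituting "true" for QEMU as the child command is an assumption, for probing only):

    # Succeeds only if /var/run/socket_vmnet accepts a connection; in this run it
    # fails with "Connection refused" before the child command ever executes.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true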

TestCertOptions (10.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-398000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-398000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.804319334s)

-- stdout --
	* [cert-options-398000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-398000" primary control-plane node in "cert-options-398000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-398000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-398000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-398000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-398000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-398000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.245916ms)

-- stdout --
	* The control-plane node cert-options-398000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-398000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-398000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-398000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-398000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-398000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.408333ms)

-- stdout --
	* The control-plane node cert-options-398000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-398000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-398000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-398000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-398000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-13 17:30:18.523015 -0700 PDT m=+724.493316460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-398000 -n cert-options-398000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-398000 -n cert-options-398000: exit status 7 (28.996667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-398000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-398000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-398000
--- FAIL: TestCertOptions (10.09s)
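
TestCertOptions never reaches its real assertions: the SAN checks at cert_options_test.go:69 and the port checks at :93 and :106 fail only because the VM never started, so the apiserver certificate was never generated. For reference, on a run where the host boots, the SAN verification reduces to the ssh command recorded above; a sketch (the grep filter is an illustrative addition, not part of the test):

    out/minikube-darwin-arm64 -p cert-options-398000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # Expected: 127.0.0.1, 192.168.15.15, localhost and www.google.com in the SAN
    # list, with the kubeconfig pointing at apiserver port 8555.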

TestCertExpiration (195.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.040519834s)

-- stdout --
	* [cert-expiration-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-880000" primary control-plane node in "cert-expiration-880000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.24456125s)

-- stdout --
	* [cert-expiration-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-880000" primary control-plane node in "cert-expiration-880000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-880000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-880000" primary control-plane node in "cert-expiration-880000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-880000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-880000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-13 17:33:18.671898 -0700 PDT m=+904.645798460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-880000 -n cert-expiration-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-880000 -n cert-expiration-880000: exit status 7 (62.012875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-880000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-880000
--- FAIL: TestCertExpiration (195.46s)
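
The 195s duration is mostly the test's built-in wait: the first start uses --cert-expiration=3m, the test sits out the three-minute window, and the second start with --cert-expiration=8760h is expected to warn about the expired certificates and regenerate them. Both starts failed at host creation here, so the expiry path was never exercised. Once socket_vmnet is healthy, the flow can be replayed with the commands logged above; the final openssl check is an illustrative addition:

    out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=3m --driver=qemu2
    # let the 3m certificates lapse, then restart with a long expiry:
    out/minikube-darwin-arm64 start -p cert-expiration-880000 --memory=2048 --cert-expiration=8760h --driver=qemu2
    # confirm the regenerated certificate lifetime inside the node:
    out/minikube-darwin-arm64 -p cert-expiration-880000 ssh \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"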

TestDockerFlags (10.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-887000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-887000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.926381292s)

-- stdout --
	* [docker-flags-887000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-887000" primary control-plane node in "docker-flags-887000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-887000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:29:58.404377   36777 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:29:58.404527   36777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:58.404533   36777 out.go:304] Setting ErrFile to fd 2...
	I0513 17:29:58.404535   36777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:58.404662   36777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:29:58.405719   36777 out.go:298] Setting JSON to false
	I0513 17:29:58.421807   36777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26968,"bootTime":1715619630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:29:58.421878   36777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:29:58.426102   36777 out.go:177] * [docker-flags-887000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:29:58.435916   36777 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:29:58.435971   36777 notify.go:220] Checking for updates...
	I0513 17:29:58.439891   36777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:29:58.442829   36777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:29:58.445887   36777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:29:58.448881   36777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:29:58.451876   36777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:29:58.455259   36777 config.go:182] Loaded profile config "force-systemd-flag-448000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:58.455327   36777 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:58.455390   36777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:29:58.459867   36777 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:29:58.466867   36777 start.go:297] selected driver: qemu2
	I0513 17:29:58.466876   36777 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:29:58.466885   36777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:29:58.469153   36777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:29:58.472859   36777 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:29:58.475937   36777 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0513 17:29:58.475960   36777 cni.go:84] Creating CNI manager for ""
	I0513 17:29:58.475970   36777 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:29:58.475979   36777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:29:58.476023   36777 start.go:340] cluster config:
	{Name:docker-flags-887000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:29:58.480473   36777 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:58.487872   36777 out.go:177] * Starting "docker-flags-887000" primary control-plane node in "docker-flags-887000" cluster
	I0513 17:29:58.491831   36777 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:29:58.491845   36777 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:29:58.491852   36777 cache.go:56] Caching tarball of preloaded images
	I0513 17:29:58.491903   36777 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:29:58.491910   36777 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:29:58.491966   36777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/docker-flags-887000/config.json ...
	I0513 17:29:58.491978   36777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/docker-flags-887000/config.json: {Name:mk96c1ec35514fb19c893211ea5df915f6aa8722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:29:58.492191   36777 start.go:360] acquireMachinesLock for docker-flags-887000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:58.492235   36777 start.go:364] duration metric: took 35.292µs to acquireMachinesLock for "docker-flags-887000"
	I0513 17:29:58.492249   36777 start.go:93] Provisioning new machine with config: &{Name:docker-flags-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:58.492281   36777 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:58.495980   36777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:29:58.513008   36777 start.go:159] libmachine.API.Create for "docker-flags-887000" (driver="qemu2")
	I0513 17:29:58.513032   36777 client.go:168] LocalClient.Create starting
	I0513 17:29:58.513087   36777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:58.513115   36777 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:58.513125   36777 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:58.513159   36777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:58.513181   36777 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:58.513187   36777 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:58.513532   36777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:58.653598   36777 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:58.756679   36777 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:58.756684   36777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:58.756869   36777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2
	I0513 17:29:58.769607   36777 main.go:141] libmachine: STDOUT: 
	I0513 17:29:58.769624   36777 main.go:141] libmachine: STDERR: 
	I0513 17:29:58.769686   36777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2 +20000M
	I0513 17:29:58.780475   36777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:58.780491   36777 main.go:141] libmachine: STDERR: 
	I0513 17:29:58.780511   36777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2
	I0513 17:29:58.780516   36777 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:58.780548   36777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:d1:92:2f:ee:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2
	I0513 17:29:58.782249   36777 main.go:141] libmachine: STDOUT: 
	I0513 17:29:58.782263   36777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:58.782282   36777 client.go:171] duration metric: took 269.251625ms to LocalClient.Create
	I0513 17:30:00.784450   36777 start.go:128] duration metric: took 2.292188459s to createHost
	I0513 17:30:00.784561   36777 start.go:83] releasing machines lock for "docker-flags-887000", held for 2.292332208s
	W0513 17:30:00.784614   36777 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:30:00.795664   36777 out.go:177] * Deleting "docker-flags-887000" in qemu2 ...
	W0513 17:30:00.821403   36777 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:30:00.821441   36777 start.go:728] Will try again in 5 seconds ...
	I0513 17:30:05.823551   36777 start.go:360] acquireMachinesLock for docker-flags-887000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:30:05.824308   36777 start.go:364] duration metric: took 641.375µs to acquireMachinesLock for "docker-flags-887000"
	I0513 17:30:05.824435   36777 start.go:93] Provisioning new machine with config: &{Name:docker-flags-887000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-887000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:30:05.824650   36777 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:30:05.835313   36777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:30:05.883435   36777 start.go:159] libmachine.API.Create for "docker-flags-887000" (driver="qemu2")
	I0513 17:30:05.883486   36777 client.go:168] LocalClient.Create starting
	I0513 17:30:05.883602   36777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:30:05.883666   36777 main.go:141] libmachine: Decoding PEM data...
	I0513 17:30:05.883682   36777 main.go:141] libmachine: Parsing certificate...
	I0513 17:30:05.883752   36777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:30:05.883795   36777 main.go:141] libmachine: Decoding PEM data...
	I0513 17:30:05.883812   36777 main.go:141] libmachine: Parsing certificate...
	I0513 17:30:05.884469   36777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:30:06.041526   36777 main.go:141] libmachine: Creating SSH key...
	I0513 17:30:06.222873   36777 main.go:141] libmachine: Creating Disk image...
	I0513 17:30:06.222879   36777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:30:06.223091   36777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2
	I0513 17:30:06.235927   36777 main.go:141] libmachine: STDOUT: 
	I0513 17:30:06.235955   36777 main.go:141] libmachine: STDERR: 
	I0513 17:30:06.236018   36777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2 +20000M
	I0513 17:30:06.247191   36777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:30:06.247206   36777 main.go:141] libmachine: STDERR: 
	I0513 17:30:06.247220   36777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2
	I0513 17:30:06.247224   36777 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:30:06.247257   36777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:89:d9:0c:66:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/docker-flags-887000/disk.qcow2
	I0513 17:30:06.248906   36777 main.go:141] libmachine: STDOUT: 
	I0513 17:30:06.248921   36777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:30:06.248942   36777 client.go:171] duration metric: took 365.457125ms to LocalClient.Create
	I0513 17:30:08.251078   36777 start.go:128] duration metric: took 2.426448708s to createHost
	I0513 17:30:08.251131   36777 start.go:83] releasing machines lock for "docker-flags-887000", held for 2.426848959s
	W0513 17:30:08.251436   36777 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-887000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:30:08.266693   36777 out.go:177] 
	W0513 17:30:08.275853   36777 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:30:08.275881   36777 out.go:239] * 
	* 
	W0513 17:30:08.278527   36777 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:30:08.287528   36777 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-887000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-887000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-887000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.405ms)

-- stdout --
	* The control-plane node docker-flags-887000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-887000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-887000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-887000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-887000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-887000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-887000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-887000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-887000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (48.716834ms)

-- stdout --
	* The control-plane node docker-flags-887000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-887000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-887000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-887000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-887000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-887000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-05-13 17:30:08.433645 -0700 PDT m=+714.403744126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-887000 -n docker-flags-887000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-887000 -n docker-flags-887000: exit status 7 (28.340583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-887000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-887000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-887000
--- FAIL: TestDockerFlags (10.19s)
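Note: this failure, and the TestForceSystemdFlag and TestForceSystemdEnv failures that follow, share one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never boots and every later ssh/systemctl assertion runs against a host in state=Stopped (exit status 83). A minimal Go sketch that reproduces the probe; this is not part of the test suite, and the socket path simply mirrors the SocketVMnetPath value in the config above:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same socket that socket_vmnet_client reports as unreachable in the log.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This reproduces the log's condition: nothing is listening on the
		// socket, e.g. because the socket_vmnet daemon is not running.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the CI host, every qemu2/socket_vmnet test in this run can be expected to fail the same way, independent of the flags under test.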

TestForceSystemdFlag (10.03s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-448000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-448000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.815750666s)

-- stdout --
	* [force-systemd-flag-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-448000" primary control-plane node in "force-systemd-flag-448000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-448000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:29:53.384782   36755 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:29:53.384922   36755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:53.384926   36755 out.go:304] Setting ErrFile to fd 2...
	I0513 17:29:53.384929   36755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:53.385063   36755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:29:53.386108   36755 out.go:298] Setting JSON to false
	I0513 17:29:53.402214   36755 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26963,"bootTime":1715619630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:29:53.402264   36755 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:29:53.409071   36755 out.go:177] * [force-systemd-flag-448000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:29:53.415087   36755 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:29:53.415155   36755 notify.go:220] Checking for updates...
	I0513 17:29:53.423057   36755 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:29:53.426017   36755 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:29:53.429047   36755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:29:53.432020   36755 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:29:53.435036   36755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:29:53.438352   36755 config.go:182] Loaded profile config "force-systemd-env-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:53.438422   36755 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:53.438473   36755 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:29:53.442891   36755 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:29:53.450037   36755 start.go:297] selected driver: qemu2
	I0513 17:29:53.450051   36755 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:29:53.450062   36755 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:29:53.452337   36755 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:29:53.455019   36755 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:29:53.458067   36755 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 17:29:53.458080   36755 cni.go:84] Creating CNI manager for ""
	I0513 17:29:53.458087   36755 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:29:53.458091   36755 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:29:53.458122   36755 start.go:340] cluster config:
	{Name:force-systemd-flag-448000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:29:53.462617   36755 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:53.470051   36755 out.go:177] * Starting "force-systemd-flag-448000" primary control-plane node in "force-systemd-flag-448000" cluster
	I0513 17:29:53.478043   36755 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:29:53.478060   36755 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:29:53.478080   36755 cache.go:56] Caching tarball of preloaded images
	I0513 17:29:53.478154   36755 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:29:53.478160   36755 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:29:53.478252   36755 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/force-systemd-flag-448000/config.json ...
	I0513 17:29:53.478265   36755 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/force-systemd-flag-448000/config.json: {Name:mk4c4194fe2874604493ef390d32305cfd691d5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:29:53.478678   36755 start.go:360] acquireMachinesLock for force-systemd-flag-448000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:53.478715   36755 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "force-systemd-flag-448000"
	I0513 17:29:53.478729   36755 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-448000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:53.478771   36755 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:53.481993   36755 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:29:53.499855   36755 start.go:159] libmachine.API.Create for "force-systemd-flag-448000" (driver="qemu2")
	I0513 17:29:53.499876   36755 client.go:168] LocalClient.Create starting
	I0513 17:29:53.499946   36755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:53.499977   36755 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:53.499986   36755 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:53.500023   36755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:53.500046   36755 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:53.500054   36755 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:53.500578   36755 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:53.641318   36755 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:53.724489   36755 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:53.724494   36755 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:53.724681   36755 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2
	I0513 17:29:53.736995   36755 main.go:141] libmachine: STDOUT: 
	I0513 17:29:53.737019   36755 main.go:141] libmachine: STDERR: 
	I0513 17:29:53.737072   36755 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2 +20000M
	I0513 17:29:53.747724   36755 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:53.747740   36755 main.go:141] libmachine: STDERR: 
	I0513 17:29:53.747754   36755 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2
	I0513 17:29:53.747758   36755 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:53.747783   36755 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c4:de:05:1e:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2
	I0513 17:29:53.749413   36755 main.go:141] libmachine: STDOUT: 
	I0513 17:29:53.749429   36755 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:53.749454   36755 client.go:171] duration metric: took 249.578583ms to LocalClient.Create
	I0513 17:29:55.751585   36755 start.go:128] duration metric: took 2.27283775s to createHost
	I0513 17:29:55.751652   36755 start.go:83] releasing machines lock for "force-systemd-flag-448000", held for 2.272972875s
	W0513 17:29:55.751695   36755 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:55.772693   36755 out.go:177] * Deleting "force-systemd-flag-448000" in qemu2 ...
	W0513 17:29:55.793317   36755 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:55.793334   36755 start.go:728] Will try again in 5 seconds ...
	I0513 17:30:00.795440   36755 start.go:360] acquireMachinesLock for force-systemd-flag-448000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:30:00.795779   36755 start.go:364] duration metric: took 241.375µs to acquireMachinesLock for "force-systemd-flag-448000"
	I0513 17:30:00.795870   36755 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-448000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-448000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:30:00.796018   36755 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:30:00.810759   36755 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:30:00.852714   36755 start.go:159] libmachine.API.Create for "force-systemd-flag-448000" (driver="qemu2")
	I0513 17:30:00.852771   36755 client.go:168] LocalClient.Create starting
	I0513 17:30:00.852926   36755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:30:00.852999   36755 main.go:141] libmachine: Decoding PEM data...
	I0513 17:30:00.853024   36755 main.go:141] libmachine: Parsing certificate...
	I0513 17:30:00.853095   36755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:30:00.853148   36755 main.go:141] libmachine: Decoding PEM data...
	I0513 17:30:00.853166   36755 main.go:141] libmachine: Parsing certificate...
	I0513 17:30:00.853863   36755 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:30:01.013135   36755 main.go:141] libmachine: Creating SSH key...
	I0513 17:30:01.091650   36755 main.go:141] libmachine: Creating Disk image...
	I0513 17:30:01.091656   36755 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:30:01.091855   36755 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2
	I0513 17:30:01.104514   36755 main.go:141] libmachine: STDOUT: 
	I0513 17:30:01.104535   36755 main.go:141] libmachine: STDERR: 
	I0513 17:30:01.104585   36755 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2 +20000M
	I0513 17:30:01.115390   36755 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:30:01.115411   36755 main.go:141] libmachine: STDERR: 
	I0513 17:30:01.115427   36755 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2
	I0513 17:30:01.115433   36755 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:30:01.115483   36755 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:d6:d8:2d:a8:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-flag-448000/disk.qcow2
	I0513 17:30:01.117109   36755 main.go:141] libmachine: STDOUT: 
	I0513 17:30:01.117127   36755 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:30:01.117141   36755 client.go:171] duration metric: took 264.36925ms to LocalClient.Create
	I0513 17:30:03.119287   36755 start.go:128] duration metric: took 2.32328925s to createHost
	I0513 17:30:03.119394   36755 start.go:83] releasing machines lock for "force-systemd-flag-448000", held for 2.323602916s
	W0513 17:30:03.119762   36755 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-448000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:30:03.136244   36755 out.go:177] 
	W0513 17:30:03.144449   36755 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:30:03.144514   36755 out.go:239] * 
	* 
	W0513 17:30:03.147297   36755 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:30:03.160320   36755 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-448000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-448000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-448000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.594291ms)

-- stdout --
	* The control-plane node force-systemd-flag-448000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-448000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-448000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-13 17:30:03.253824 -0700 PDT m=+709.223820460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-448000 -n force-systemd-flag-448000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-448000 -n force-systemd-flag-448000: exit status 7 (33.2245ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-448000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-448000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-448000
--- FAIL: TestForceSystemdFlag (10.03s)
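Note: the log above shows minikube's recovery shape for this class of failure: StartHost fails, the profile is deleted, and exactly one retry runs after a fixed 5-second delay (start.go:728) before the run exits with GUEST_PROVISION / exit status 80. An illustrative Go sketch of that shape; this is not minikube's actual implementation, and startHost below is a hypothetical stand-in that only probes the socket_vmnet socket:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// startHost stands in for the real host-creation step; here it only
// checks that the socket_vmnet control socket accepts connections.
func startHost(sock string) error {
	conn, err := net.Dial("unix", sock)
	if err != nil {
		return fmt.Errorf("creating host: %w", err)
	}
	return conn.Close()
}

func main() {
	const sock = "/var/run/socket_vmnet"
	err := startHost(sock)
	if err != nil {
		// Mirrors "! StartHost failed, but will try again" and
		// "Will try again in 5 seconds ..." in the log.
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		err = startHost(sock)
	}
	if err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // the "exit status 80" the test harness observes
	}
}

Since the daemon stays down for the whole run, the retry buys nothing here; both attempts fail identically, which is why each of these tests burns roughly 10 seconds before failing.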

TestForceSystemdEnv (12s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-090000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-090000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.786761084s)

-- stdout --
	* [force-systemd-env-090000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-090000" primary control-plane node in "force-systemd-env-090000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-090000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:29:46.408183   36723 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:29:46.408311   36723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:46.408315   36723 out.go:304] Setting ErrFile to fd 2...
	I0513 17:29:46.408317   36723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:46.408430   36723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:29:46.409444   36723 out.go:298] Setting JSON to false
	I0513 17:29:46.425395   36723 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26956,"bootTime":1715619630,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:29:46.425465   36723 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:29:46.431382   36723 out.go:177] * [force-systemd-env-090000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:29:46.438354   36723 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:29:46.443239   36723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:29:46.438422   36723 notify.go:220] Checking for updates...
	I0513 17:29:46.449362   36723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:29:46.452341   36723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:29:46.455412   36723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:29:46.458302   36723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0513 17:29:46.461732   36723 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:46.461793   36723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:29:46.466260   36723 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:29:46.473315   36723 start.go:297] selected driver: qemu2
	I0513 17:29:46.473323   36723 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:29:46.473330   36723 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:29:46.475561   36723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:29:46.478314   36723 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:29:46.481336   36723 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 17:29:46.481348   36723 cni.go:84] Creating CNI manager for ""
	I0513 17:29:46.481356   36723 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:29:46.481360   36723 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:29:46.481398   36723 start.go:340] cluster config:
	{Name:force-systemd-env-090000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:29:46.485962   36723 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:46.493296   36723 out.go:177] * Starting "force-systemd-env-090000" primary control-plane node in "force-systemd-env-090000" cluster
	I0513 17:29:46.497369   36723 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:29:46.497386   36723 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:29:46.497395   36723 cache.go:56] Caching tarball of preloaded images
	I0513 17:29:46.497452   36723 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:29:46.497458   36723 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:29:46.497524   36723 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/force-systemd-env-090000/config.json ...
	I0513 17:29:46.497536   36723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/force-systemd-env-090000/config.json: {Name:mk78b92aa19564e080ab3983c6eed5de2e321822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:29:46.497756   36723 start.go:360] acquireMachinesLock for force-systemd-env-090000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:46.497789   36723 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "force-systemd-env-090000"
	I0513 17:29:46.497802   36723 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:46.497832   36723 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:46.506300   36723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:29:46.522948   36723 start.go:159] libmachine.API.Create for "force-systemd-env-090000" (driver="qemu2")
	I0513 17:29:46.522973   36723 client.go:168] LocalClient.Create starting
	I0513 17:29:46.523033   36723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:46.523063   36723 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:46.523074   36723 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:46.523113   36723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:46.523135   36723 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:46.523146   36723 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:46.523485   36723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:46.664421   36723 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:46.766942   36723 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:46.766952   36723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:46.767158   36723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2
	I0513 17:29:46.779604   36723 main.go:141] libmachine: STDOUT: 
	I0513 17:29:46.779628   36723 main.go:141] libmachine: STDERR: 
	I0513 17:29:46.779692   36723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2 +20000M
	I0513 17:29:46.790718   36723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:46.790734   36723 main.go:141] libmachine: STDERR: 
	I0513 17:29:46.790758   36723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2
	I0513 17:29:46.790763   36723 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:46.790794   36723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c8:d2:cb:48:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2
	I0513 17:29:46.792488   36723 main.go:141] libmachine: STDOUT: 
	I0513 17:29:46.792504   36723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:46.792523   36723 client.go:171] duration metric: took 269.550666ms to LocalClient.Create
	I0513 17:29:48.794685   36723 start.go:128] duration metric: took 2.296882291s to createHost
	I0513 17:29:48.794735   36723 start.go:83] releasing machines lock for "force-systemd-env-090000", held for 2.296975041s
	W0513 17:29:48.794815   36723 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:48.801530   36723 out.go:177] * Deleting "force-systemd-env-090000" in qemu2 ...
	W0513 17:29:48.825577   36723 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:48.825604   36723 start.go:728] Will try again in 5 seconds ...
	I0513 17:29:53.827648   36723 start.go:360] acquireMachinesLock for force-systemd-env-090000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:55.751780   36723 start.go:364] duration metric: took 1.924127125s to acquireMachinesLock for "force-systemd-env-090000"
	I0513 17:29:55.751925   36723 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:55.752183   36723 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:55.757749   36723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0513 17:29:55.806679   36723 start.go:159] libmachine.API.Create for "force-systemd-env-090000" (driver="qemu2")
	I0513 17:29:55.806727   36723 client.go:168] LocalClient.Create starting
	I0513 17:29:55.806840   36723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:55.806893   36723 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:55.806910   36723 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:55.806976   36723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:55.807019   36723 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:55.807031   36723 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:55.807544   36723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:55.963092   36723 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:56.086943   36723 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:56.086949   36723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:56.087139   36723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2
	I0513 17:29:56.100111   36723 main.go:141] libmachine: STDOUT: 
	I0513 17:29:56.100132   36723 main.go:141] libmachine: STDERR: 
	I0513 17:29:56.100193   36723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2 +20000M
	I0513 17:29:56.111076   36723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:56.111093   36723 main.go:141] libmachine: STDERR: 
	I0513 17:29:56.111109   36723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2
	I0513 17:29:56.111120   36723 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:56.111153   36723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:5a:d8:db:f3:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/force-systemd-env-090000/disk.qcow2
	I0513 17:29:56.112895   36723 main.go:141] libmachine: STDOUT: 
	I0513 17:29:56.112912   36723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:56.112923   36723 client.go:171] duration metric: took 306.195084ms to LocalClient.Create
	I0513 17:29:58.114274   36723 start.go:128] duration metric: took 2.362085375s to createHost
	I0513 17:29:58.114333   36723 start.go:83] releasing machines lock for "force-systemd-env-090000", held for 2.362553209s
	W0513 17:29:58.114732   36723 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:58.131514   36723 out.go:177] 
	W0513 17:29:58.139441   36723 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:29:58.139461   36723 out.go:239] * 
	* 
	W0513 17:29:58.141584   36723 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:29:58.151378   36723 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-090000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-090000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-090000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.915916ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-090000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-090000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-13 17:29:58.246966 -0700 PDT m=+704.216862168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-090000 -n force-systemd-env-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-090000 -n force-systemd-env-090000: exit status 7 (31.974125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-090000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-090000
--- FAIL: TestForceSystemdEnv (12.00s)
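
Every failure in this section has the same proximate cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the host stays Stopped. A minimal standalone probe, written here purely as an illustration (it is not part of the test suite), reproduces the symptom from Go:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The logs above imply nothing is listening on this socket; in
		// that state DialTimeout returns "connection refused", matching
		// the STDERR captured from socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, the problem lies with the host's socket_vmnet service rather than with minikube, and restarting that service is the first thing to try.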

                                                
                                    
TestErrorSpam/setup (10.02s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 --driver=qemu2 : exit status 80 (10.016323792s)

                                                
                                                
-- stdout --
	* [nospam-940000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-940000" primary control-plane node in "nospam-940000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-940000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-940000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-940000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-940000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18872
- KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-940000" primary control-plane node in "nospam-940000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-940000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-940000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (10.02s)
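
Because the VM never starts, every stderr line lands in the "unexpected" list: the allowlist in error_spam_test.go only tolerates known-benign output. The filtering idea can be sketched as below; unexpectedStderr and its allowlist are illustrative names, not the test's actual code.

	package main

	import (
		"fmt"
		"strings"
	)

	// unexpectedStderr returns the stderr lines that do not start with any
	// allowed prefix; an illustrative reimplementation of the idea behind
	// error_spam_test.go's check, not the test's actual logic.
	func unexpectedStderr(stderr string, allowed []string) []string {
		var unexpected []string
		for _, line := range strings.Split(stderr, "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			ok := false
			for _, prefix := range allowed {
				if strings.HasPrefix(line, prefix) {
					ok = true
					break
				}
			}
			if !ok {
				unexpected = append(unexpected, line)
			}
		}
		return unexpected
	}

	func main() {
		stderr := "! StartHost failed, but will try again: ...\n! Local proxy ignored: ..."
		// With an allowlist that only tolerates proxy warnings, the
		// StartHost line is reported, just as in the failure above.
		fmt.Println(unexpectedStderr(stderr, []string{"! Local proxy ignored"}))
	}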

                                                
                                    
TestFunctional/serial/StartWithProxy (9.95s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.877054042s)

                                                
                                                
-- stdout --
	* [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-968000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55928 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55928 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:55928 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-968000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18872
- KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-968000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:55928 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:55928 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:55928 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (68.111625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.95s)
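
The proxy assertions never get a chance to run: the harness sets HTTP_PROXY=localhost:55928 for the child minikube process, and the expected "Found network options:" and "You appear to be using a proxy" messages would only be printed once startup gets past host creation. Driving the same binary with the proxy injected can be sketched as follows, with the binary path, profile, and proxy value taken from the log above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-968000",
			"--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
		// Inherit the environment and inject the local proxy, mirroring
		// the HTTP_PROXY=localhost:55928 value visible in the stderr above.
		cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:55928")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}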

                                                
                                    
TestFunctional/serial/SoftStart (5.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --alsologtostderr -v=8: exit status 80 (5.183634958s)

                                                
                                                
-- stdout --
	* [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:19:16.432042   35316 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:19:16.432164   35316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:19:16.432167   35316 out.go:304] Setting ErrFile to fd 2...
	I0513 17:19:16.432169   35316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:19:16.432318   35316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:19:16.433280   35316 out.go:298] Setting JSON to false
	I0513 17:19:16.449456   35316 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26326,"bootTime":1715619630,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:19:16.449525   35316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:19:16.454703   35316 out.go:177] * [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:19:16.461631   35316 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:19:16.465672   35316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:19:16.461725   35316 notify.go:220] Checking for updates...
	I0513 17:19:16.468626   35316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:19:16.471641   35316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:19:16.474638   35316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:19:16.477532   35316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:19:16.480919   35316 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:19:16.480971   35316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:19:16.485599   35316 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:19:16.492602   35316 start.go:297] selected driver: qemu2
	I0513 17:19:16.492610   35316 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:19:16.492688   35316 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:19:16.494937   35316 cni.go:84] Creating CNI manager for ""
	I0513 17:19:16.494952   35316 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:19:16.495003   35316 start.go:340] cluster config:
	{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:19:16.499241   35316 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:19:16.507618   35316 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	I0513 17:19:16.511569   35316 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:19:16.511584   35316 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:19:16.511592   35316 cache.go:56] Caching tarball of preloaded images
	I0513 17:19:16.511646   35316 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:19:16.511651   35316 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:19:16.511707   35316 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/functional-968000/config.json ...
	I0513 17:19:16.512104   35316 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:19:16.512132   35316 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "functional-968000"
	I0513 17:19:16.512145   35316 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:19:16.512151   35316 fix.go:54] fixHost starting: 
	I0513 17:19:16.512261   35316 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0513 17:19:16.512270   35316 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:19:16.520635   35316 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0513 17:19:16.524666   35316 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
	I0513 17:19:16.526667   35316 main.go:141] libmachine: STDOUT: 
	I0513 17:19:16.526691   35316 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:19:16.526718   35316 fix.go:56] duration metric: took 14.567333ms for fixHost
	I0513 17:19:16.526723   35316 start.go:83] releasing machines lock for "functional-968000", held for 14.583541ms
	W0513 17:19:16.526728   35316 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:19:16.526771   35316 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:19:16.526775   35316 start.go:728] Will try again in 5 seconds ...
	I0513 17:19:21.528872   35316 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:19:21.529315   35316 start.go:364] duration metric: took 366.917µs to acquireMachinesLock for "functional-968000"
	I0513 17:19:21.529435   35316 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:19:21.529453   35316 fix.go:54] fixHost starting: 
	I0513 17:19:21.530230   35316 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0513 17:19:21.530257   35316 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:19:21.537643   35316 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0513 17:19:21.541836   35316 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
	I0513 17:19:21.550575   35316 main.go:141] libmachine: STDOUT: 
	I0513 17:19:21.550666   35316 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:19:21.550723   35316 fix.go:56] duration metric: took 21.271625ms for fixHost
	I0513 17:19:21.550743   35316 start.go:83] releasing machines lock for "functional-968000", held for 21.406459ms
	W0513 17:19:21.550921   35316 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:19:21.555922   35316 out.go:177] 
	W0513 17:19:21.559678   35316 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:19:21.559713   35316 out.go:239] * 
	* 
	W0513 17:19:21.562106   35316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:19:21.572567   35316 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-968000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.185372708s for "functional-968000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (65.89425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
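
Soft start takes the fixHost path ("Skipping create...Using existing machine configuration"): it restarts the existing VM, hits the same refused socket, waits five seconds, and retries exactly once before giving up. That control flow, paraphrased as a sketch in which restartVM is a hypothetical stand-in for the driver call, not minikube's actual API:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// startWithRetry paraphrases the two-attempt restart loop visible in
	// the log above.
	func startWithRetry(restartVM func() error) error {
		var err error
		for attempt := 1; attempt <= 2; attempt++ {
			if err = restartVM(); err == nil {
				return nil
			}
			if attempt < 2 {
				log.Printf("StartHost failed, but will try again: %v", err)
				time.Sleep(5 * time.Second)
			}
		}
		return fmt.Errorf("error provisioning guest: %w", err)
	}

	func main() {
		err := startWithRetry(func() error {
			return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": connection refused`)
		})
		log.Println(err)
	}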

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.501625ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-968000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.527958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
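
The context never exists because minikube start exited before writing a kubeconfig entry, so kubectl config current-context has nothing to report. The same check the test performs, as a small standalone sketch:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the test's expectation: after a successful start, the
		// current kubectl context should equal the profile name
		// functional-968000.
		out, err := exec.Command("kubectl", "config", "current-context").Output()
		if err != nil {
			fmt.Println("no current context:", err) // the state seen in this run
			return
		}
		fmt.Println("current context:", strings.TrimSpace(string(out)))
	}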

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-968000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-968000 get po -A: exit status 1 (26.141084ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-968000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-968000\n"*: args "kubectl --context functional-968000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-968000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.302958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl images: exit status 83 (41.945167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

                                                
                                                
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.896375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

                                                
                                                
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-968000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.8135ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

                                                
                                                
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.872375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

                                                
                                                
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-968000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
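
The cache-reload flow exercised here is: delete the cached image inside the node, confirm it is gone, run minikube cache reload, then confirm crictl can inspect it again. With the host stopped, every ssh step short-circuits with exit status 83 before anything runs in the node. The same command sequence, restated as a sketch with the binary path and profile taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		steps := [][]string{
			{"out/minikube-darwin-arm64", "-p", "functional-968000", "ssh", "sudo docker rmi registry.k8s.io/pause:latest"},
			{"out/minikube-darwin-arm64", "-p", "functional-968000", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"},
			{"out/minikube-darwin-arm64", "-p", "functional-968000", "cache", "reload"},
			{"out/minikube-darwin-arm64", "-p", "functional-968000", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"},
		}
		for _, args := range steps {
			// In this run every ssh step fails immediately because the
			// control-plane host is stopped.
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				fmt.Printf("%v -> %v\n%s", args, err, out)
			}
		}
	}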

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 kubectl -- --context functional-968000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 kubectl -- --context functional-968000 get pods: exit status 1 (598.28875ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-968000
	* no server found for cluster "functional-968000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-968000 kubectl -- --context functional-968000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.889125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.63s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-968000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-968000 get pods: exit status 1 (917.81325ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-968000
	* no server found for cluster "functional-968000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-968000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (28.125709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)
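
Note: the direct out/kubectl invocation fails for the same reason as the wrapped one above. Once the VM is actually running, a stale or missing kubeconfig entry can usually be rewritten with minikube's own helper (a sketch; it is a no-op while the host stays Stopped):

	out/minikube-darwin-arm64 -p functional-968000 update-context    # regenerates the kubeconfig entry for this profile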

TestFunctional/serial/ExtraConfig (5.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.172329667s)

-- stdout --
	* [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-968000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-968000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.17286225s for "functional-968000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (69.919083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.24s)
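
Note: every restart attempt in this report fails on the same root cause: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver's networking helper cannot attach. A hedged diagnostic sketch, assuming socket_vmnet was installed via Homebrew (the install method is not recorded in this log):

	ls -l /var/run/socket_vmnet                 # does the socket exist at the path the driver expects?
	sudo brew services restart socket_vmnet     # Homebrew installs run the helper as a root service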

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-968000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-968000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.909292ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-968000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.182583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
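
Note: ComponentHealth never evaluates pod health here; kubectl exits on the missing context first. Against a running cluster, the same label selector gives a quick control-plane spot check (a sketch, not output from this run):

	kubectl --context functional-968000 get po -l tier=control-plane -n kube-system    # expect kube-apiserver, etcd, scheduler, controller-manager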

TestFunctional/serial/LogsCmd (0.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 logs: exit status 83 (73.756625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | -p download-only-547000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| delete  | -p download-only-547000                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| start   | -o=json --download-only                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | -p download-only-115000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| delete  | -p download-only-115000                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| delete  | -p download-only-547000                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| delete  | -p download-only-115000                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| start   | --download-only -p                                                       | binary-mirror-248000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | binary-mirror-248000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:55896                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-248000                                                  | binary-mirror-248000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| addons  | enable dashboard -p                                                      | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | addons-521000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | addons-521000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-521000 --wait=true                                             | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-521000                                                         | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| start   | -p nospam-940000 -n=1 --memory=2250 --wait=false                         | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:19 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-940000                                                         | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
	| cache   | functional-968000 cache delete                                           | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	| ssh     | functional-968000 ssh sudo                                               | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-968000                                                        | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-968000 cache reload                                           | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-968000 kubectl --                                             | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | --context functional-968000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 17:19:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 17:19:26.629283   35395 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:19:26.629386   35395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:19:26.629388   35395 out.go:304] Setting ErrFile to fd 2...
	I0513 17:19:26.629394   35395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:19:26.629497   35395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:19:26.630609   35395 out.go:298] Setting JSON to false
	I0513 17:19:26.646306   35395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26336,"bootTime":1715619630,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:19:26.646365   35395 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:19:26.651406   35395 out.go:177] * [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:19:26.657338   35395 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:19:26.661378   35395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:19:26.657390   35395 notify.go:220] Checking for updates...
	I0513 17:19:26.668322   35395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:19:26.671379   35395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:19:26.674299   35395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:19:26.677401   35395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:19:26.680635   35395 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:19:26.680689   35395 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:19:26.685338   35395 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:19:26.692258   35395 start.go:297] selected driver: qemu2
	I0513 17:19:26.692264   35395 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:19:26.692316   35395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:19:26.694481   35395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:19:26.694500   35395 cni.go:84] Creating CNI manager for ""
	I0513 17:19:26.694506   35395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:19:26.694543   35395 start.go:340] cluster config:
	{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:19:26.698534   35395 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:19:26.706341   35395 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
	I0513 17:19:26.710335   35395 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:19:26.710348   35395 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:19:26.710357   35395 cache.go:56] Caching tarball of preloaded images
	I0513 17:19:26.710410   35395 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:19:26.710414   35395 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:19:26.710475   35395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/functional-968000/config.json ...
	I0513 17:19:26.710767   35395 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:19:26.710796   35395 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "functional-968000"
	I0513 17:19:26.710803   35395 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:19:26.710806   35395 fix.go:54] fixHost starting: 
	I0513 17:19:26.710908   35395 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0513 17:19:26.710914   35395 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:19:26.717355   35395 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0513 17:19:26.721326   35395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
	I0513 17:19:26.723190   35395 main.go:141] libmachine: STDOUT: 
	I0513 17:19:26.723208   35395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:19:26.723240   35395 fix.go:56] duration metric: took 12.435084ms for fixHost
	I0513 17:19:26.723242   35395 start.go:83] releasing machines lock for "functional-968000", held for 12.4445ms
	W0513 17:19:26.723249   35395 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:19:26.723277   35395 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:19:26.723281   35395 start.go:728] Will try again in 5 seconds ...
	I0513 17:19:31.725463   35395 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:19:31.726028   35395 start.go:364] duration metric: took 471.875µs to acquireMachinesLock for "functional-968000"
	I0513 17:19:31.726175   35395 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:19:31.726190   35395 fix.go:54] fixHost starting: 
	I0513 17:19:31.726930   35395 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
	W0513 17:19:31.726951   35395 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:19:31.730430   35395 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
	I0513 17:19:31.734504   35395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
	I0513 17:19:31.742535   35395 main.go:141] libmachine: STDOUT: 
	I0513 17:19:31.742597   35395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:19:31.742699   35395 fix.go:56] duration metric: took 16.50775ms for fixHost
	I0513 17:19:31.742711   35395 start.go:83] releasing machines lock for "functional-968000", held for 16.665709ms
	W0513 17:19:31.742897   35395 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:19:31.749327   35395 out.go:177] 
	W0513 17:19:31.753392   35395 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:19:31.753407   35395 out.go:239] * 
	W0513 17:19:31.755349   35395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:19:31.763341   35395 out.go:177] 
	
	
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-968000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | -p download-only-547000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-547000                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| start   | -o=json --download-only                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | -p download-only-115000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-115000                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-547000                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-115000                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| start   | --download-only -p                                                       | binary-mirror-248000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | binary-mirror-248000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:55896                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-248000                                                  | binary-mirror-248000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| addons  | enable dashboard -p                                                      | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | addons-521000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | addons-521000                                                            |                      |         |         |                     |                     |
| start   | -p addons-521000 --wait=true                                             | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-521000                                                         | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| start   | -p nospam-940000 -n=1 --memory=2250 --wait=false                         | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:19 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-940000                                                         | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | functional-968000 cache delete                                           | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
| ssh     | functional-968000 ssh sudo                                               | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-968000                                                        | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache reload                                           | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-968000 kubectl --                                             | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --context functional-968000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/13 17:19:26
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0513 17:19:26.629283   35395 out.go:291] Setting OutFile to fd 1 ...
I0513 17:19:26.629386   35395 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:19:26.629388   35395 out.go:304] Setting ErrFile to fd 2...
I0513 17:19:26.629394   35395 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:19:26.629497   35395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:19:26.630609   35395 out.go:298] Setting JSON to false
I0513 17:19:26.646306   35395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26336,"bootTime":1715619630,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0513 17:19:26.646365   35395 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0513 17:19:26.651406   35395 out.go:177] * [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0513 17:19:26.657338   35395 out.go:177]   - MINIKUBE_LOCATION=18872
I0513 17:19:26.661378   35395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
I0513 17:19:26.657390   35395 notify.go:220] Checking for updates...
I0513 17:19:26.668322   35395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0513 17:19:26.671379   35395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0513 17:19:26.674299   35395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
I0513 17:19:26.677401   35395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0513 17:19:26.680635   35395 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:19:26.680689   35395 driver.go:392] Setting default libvirt URI to qemu:///system
I0513 17:19:26.685338   35395 out.go:177] * Using the qemu2 driver based on existing profile
I0513 17:19:26.692258   35395 start.go:297] selected driver: qemu2
I0513 17:19:26.692264   35395 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0513 17:19:26.692316   35395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0513 17:19:26.694481   35395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0513 17:19:26.694500   35395 cni.go:84] Creating CNI manager for ""
I0513 17:19:26.694506   35395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0513 17:19:26.694543   35395 start.go:340] cluster config:
{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0513 17:19:26.698534   35395 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0513 17:19:26.706341   35395 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
I0513 17:19:26.710335   35395 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0513 17:19:26.710348   35395 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0513 17:19:26.710357   35395 cache.go:56] Caching tarball of preloaded images
I0513 17:19:26.710410   35395 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0513 17:19:26.710414   35395 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0513 17:19:26.710475   35395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/functional-968000/config.json ...
I0513 17:19:26.710767   35395 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0513 17:19:26.710796   35395 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "functional-968000"
I0513 17:19:26.710803   35395 start.go:96] Skipping create...Using existing machine configuration
I0513 17:19:26.710806   35395 fix.go:54] fixHost starting: 
I0513 17:19:26.710908   35395 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0513 17:19:26.710914   35395 fix.go:138] unexpected machine state, will restart: <nil>
I0513 17:19:26.717355   35395 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0513 17:19:26.721326   35395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
I0513 17:19:26.723190   35395 main.go:141] libmachine: STDOUT: 
I0513 17:19:26.723208   35395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0513 17:19:26.723240   35395 fix.go:56] duration metric: took 12.435084ms for fixHost
I0513 17:19:26.723242   35395 start.go:83] releasing machines lock for "functional-968000", held for 12.4445ms
W0513 17:19:26.723249   35395 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0513 17:19:26.723277   35395 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0513 17:19:26.723281   35395 start.go:728] Will try again in 5 seconds ...
I0513 17:19:31.725463   35395 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0513 17:19:31.726028   35395 start.go:364] duration metric: took 471.875µs to acquireMachinesLock for "functional-968000"
I0513 17:19:31.726175   35395 start.go:96] Skipping create...Using existing machine configuration
I0513 17:19:31.726190   35395 fix.go:54] fixHost starting: 
I0513 17:19:31.726930   35395 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0513 17:19:31.726951   35395 fix.go:138] unexpected machine state, will restart: <nil>
I0513 17:19:31.730430   35395 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0513 17:19:31.734504   35395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
I0513 17:19:31.742535   35395 main.go:141] libmachine: STDOUT: 
I0513 17:19:31.742597   35395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0513 17:19:31.742699   35395 fix.go:56] duration metric: took 16.50775ms for fixHost
I0513 17:19:31.742711   35395 start.go:83] releasing machines lock for "functional-968000", held for 16.665709ms
W0513 17:19:31.742897   35395 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0513 17:19:31.749327   35395 out.go:177] 
W0513 17:19:31.753392   35395 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0513 17:19:31.753407   35395 out.go:239] * 
W0513 17:19:31.755349   35395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0513 17:19:31.763341   35395 out.go:177] 

* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.07s)
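Note: both logs tests fail for the same reason visible in the output above. Every "Restarting existing qemu2 VM" attempt ends with STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused, so the guest VM never boots and the captured minikube logs contain no Linux output for the tests' "Linux" assertion to match. A minimal diagnostic sketch for the CI host follows; it assumes a Homebrew-style socket_vmnet install, and the launchd service label is an assumption, not taken from this report:

    ls -l /var/run/socket_vmnet                  # the listening UNIX socket should exist
    sudo lsof -U | grep socket_vmnet             # a socket_vmnet process should hold the socket
    sudo launchctl list | grep socket_vmnet      # the daemon should be loaded
    # If the daemon is not running, restarting it (label assumed from the
    # upstream socket_vmnet install docs) may clear the "Connection refused" errors:
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet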

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2591000856/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | -p download-only-547000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-547000                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| start   | -o=json --download-only                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | -p download-only-115000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-115000                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-547000                                                  | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| delete  | -p download-only-115000                                                  | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| start   | --download-only -p                                                       | binary-mirror-248000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | binary-mirror-248000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:55896                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-248000                                                  | binary-mirror-248000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| addons  | enable dashboard -p                                                      | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | addons-521000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | addons-521000                                                            |                      |         |         |                     |                     |
| start   | -p addons-521000 --wait=true                                             | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-521000                                                         | addons-521000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
| start   | -p nospam-940000 -n=1 --memory=2250 --wait=false                         | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:19 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-940000 --log_dir                                                  | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-940000                                                         | nospam-940000        | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache add                                              | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | functional-968000 cache delete                                           | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | minikube-local-cache-test:functional-968000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
| ssh     | functional-968000 ssh sudo                                               | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-968000                                                        | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-968000 cache reload                                           | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
| ssh     | functional-968000 ssh                                                    | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 13 May 24 17:19 PDT | 13 May 24 17:19 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-968000 kubectl --                                             | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --context functional-968000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-968000                                                     | functional-968000    | jenkins | v1.33.1 | 13 May 24 17:19 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/13 17:19:26
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0513 17:19:26.629283   35395 out.go:291] Setting OutFile to fd 1 ...
I0513 17:19:26.629386   35395 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:19:26.629388   35395 out.go:304] Setting ErrFile to fd 2...
I0513 17:19:26.629394   35395 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:19:26.629497   35395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:19:26.630609   35395 out.go:298] Setting JSON to false
I0513 17:19:26.646306   35395 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26336,"bootTime":1715619630,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0513 17:19:26.646365   35395 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0513 17:19:26.651406   35395 out.go:177] * [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0513 17:19:26.657338   35395 out.go:177]   - MINIKUBE_LOCATION=18872
I0513 17:19:26.661378   35395 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
I0513 17:19:26.657390   35395 notify.go:220] Checking for updates...
I0513 17:19:26.668322   35395 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0513 17:19:26.671379   35395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0513 17:19:26.674299   35395 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
I0513 17:19:26.677401   35395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0513 17:19:26.680635   35395 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:19:26.680689   35395 driver.go:392] Setting default libvirt URI to qemu:///system
I0513 17:19:26.685338   35395 out.go:177] * Using the qemu2 driver based on existing profile
I0513 17:19:26.692258   35395 start.go:297] selected driver: qemu2
I0513 17:19:26.692264   35395 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0513 17:19:26.692316   35395 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0513 17:19:26.694481   35395 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0513 17:19:26.694500   35395 cni.go:84] Creating CNI manager for ""
I0513 17:19:26.694506   35395 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0513 17:19:26.694543   35395 start.go:340] cluster config:
{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0513 17:19:26.698534   35395 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0513 17:19:26.706341   35395 out.go:177] * Starting "functional-968000" primary control-plane node in "functional-968000" cluster
I0513 17:19:26.710335   35395 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0513 17:19:26.710348   35395 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0513 17:19:26.710357   35395 cache.go:56] Caching tarball of preloaded images
I0513 17:19:26.710410   35395 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0513 17:19:26.710414   35395 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0513 17:19:26.710475   35395 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/functional-968000/config.json ...
I0513 17:19:26.710767   35395 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0513 17:19:26.710796   35395 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "functional-968000"
I0513 17:19:26.710803   35395 start.go:96] Skipping create...Using existing machine configuration
I0513 17:19:26.710806   35395 fix.go:54] fixHost starting: 
I0513 17:19:26.710908   35395 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0513 17:19:26.710914   35395 fix.go:138] unexpected machine state, will restart: <nil>
I0513 17:19:26.717355   35395 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0513 17:19:26.721326   35395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
I0513 17:19:26.723190   35395 main.go:141] libmachine: STDOUT: 
I0513 17:19:26.723208   35395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0513 17:19:26.723240   35395 fix.go:56] duration metric: took 12.435084ms for fixHost
I0513 17:19:26.723242   35395 start.go:83] releasing machines lock for "functional-968000", held for 12.4445ms
W0513 17:19:26.723249   35395 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0513 17:19:26.723277   35395 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0513 17:19:26.723281   35395 start.go:728] Will try again in 5 seconds ...
I0513 17:19:31.725463   35395 start.go:360] acquireMachinesLock for functional-968000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0513 17:19:31.726028   35395 start.go:364] duration metric: took 471.875µs to acquireMachinesLock for "functional-968000"
I0513 17:19:31.726175   35395 start.go:96] Skipping create...Using existing machine configuration
I0513 17:19:31.726190   35395 fix.go:54] fixHost starting: 
I0513 17:19:31.726930   35395 fix.go:112] recreateIfNeeded on functional-968000: state=Stopped err=<nil>
W0513 17:19:31.726951   35395 fix.go:138] unexpected machine state, will restart: <nil>
I0513 17:19:31.730430   35395 out.go:177] * Restarting existing qemu2 VM for "functional-968000" ...
I0513 17:19:31.734504   35395 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:05:d4:05:ca:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/functional-968000/disk.qcow2
I0513 17:19:31.742535   35395 main.go:141] libmachine: STDOUT: 
I0513 17:19:31.742597   35395 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0513 17:19:31.742699   35395 fix.go:56] duration metric: took 16.50775ms for fixHost
I0513 17:19:31.742711   35395 start.go:83] releasing machines lock for "functional-968000", held for 16.665709ms
W0513 17:19:31.742897   35395 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-968000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0513 17:19:31.749327   35395 out.go:177] 
W0513 17:19:31.753392   35395 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0513 17:19:31.753407   35395 out.go:239] * 
W0513 17:19:31.755349   35395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0513 17:19:31.763341   35395 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
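
[Editor's note] Every functional-test failure below shares the root cause visible in the driver log above: the qemu2 driver could not reach the socket_vmnet helper ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never restarted and each subsequent test found the host in state=Stopped. A minimal pre-flight check in Go, a sketch assuming only the default socket path shown in the log; this helper is illustrative and not part of minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    // checkSocketVMnet dials the unix socket that the qemu2 driver hands to
    // qemu-system-aarch64 via socket_vmnet_client; a refused connection here
    // reproduces the failure above without booting a VM.
    func checkSocketVMnet(path string) error {
    	conn, err := net.DialTimeout("unix", path, 2*time.Second)
    	if err != nil {
    		return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
    	}
    	return conn.Close()
    }

    func main() {
    	// Path taken from the log lines above; adjust if socket_vmnet was
    	// started with a different socket location.
    	if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket_vmnet is up")
    }

If this check fails on the CI host, socket_vmnet itself is down; restarting the service (for a Homebrew install, something like "sudo brew services restart socket_vmnet") should clear this whole class of failures.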

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-968000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-968000 apply -f testdata/invalidsvc.yaml: exit status 1 (31.509625ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-968000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-968000 --alsologtostderr -v=1] stderr:
I0513 17:20:15.973780   35725 out.go:291] Setting OutFile to fd 1 ...
I0513 17:20:15.974335   35725 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:15.974338   35725 out.go:304] Setting ErrFile to fd 2...
I0513 17:20:15.974341   35725 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:15.974482   35725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:20:15.974707   35725 mustload.go:65] Loading cluster: functional-968000
I0513 17:20:15.974883   35725 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:15.978963   35725 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
I0513 17:20:15.981998   35725 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (40.676291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.19s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 status: exit status 7 (28.822875ms)

-- stdout --
	functional-968000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-968000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.109583ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-968000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 status -o json: exit status 7 (29.322292ms)

-- stdout --
	{"Name":"functional-968000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-968000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.080084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
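
[Editor's note] The JSON form of the status output above is the stable way to consume it programmatically. A short sketch that decodes the exact line captured in this log; the Status struct is hypothetical, written to match only the fields visible here:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status mirrors the fields present in the `status -o json` line above.
    type Status struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    	Worker     bool
    }

    func main() {
    	// Verbatim output captured in the log above.
    	raw := `{"Name":"functional-968000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
    	var st Status
    	if err := json.Unmarshal([]byte(raw), &st); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
    }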

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-968000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-968000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.723459ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-968000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-968000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-968000 describe po hello-node-connect: exit status 1 (26.276667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:1600: "kubectl --context functional-968000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-968000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-968000 logs -l app=hello-node-connect: exit status 1 (26.774333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:1606: "kubectl --context functional-968000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-968000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-968000 describe svc hello-node-connect: exit status 1 (26.262875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:1612: "kubectl --context functional-968000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.226458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-968000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.456167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "echo hello": exit status 83 (44.541833ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n"*. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "cat /etc/hostname": exit status 83 (50.998125ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-968000"- but got *"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n"*. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (32.899167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (52.418167ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.879209ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-968000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-968000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cp functional-968000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1192730938/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 cp functional-968000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1192730938/001/cp-test.txt: exit status 83 (39.740083ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 cp functional-968000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1192730938/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /home/docker/cp-test.txt": exit status 83 (38.783833ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1192730938/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.734458ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (38.027208ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-968000 ssh -n functional-968000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-968000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-968000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.26s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/35055/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/test/nested/copy/35055/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/test/nested/copy/35055/hosts": exit status 83 (39.7255ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/test/nested/copy/35055/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-968000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-968000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (29.103958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/35055.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/35055.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/35055.pem": exit status 83 (40.46925ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/35055.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/35055.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/35055.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/35055.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/35055.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/35055.pem": exit status 83 (39.750917ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/35055.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /usr/share/ca-certificates/35055.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/35055.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.531375ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/350552.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/350552.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/350552.pem": exit status 83 (37.822708ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/350552.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/350552.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/350552.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/350552.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/350552.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /usr/share/ca-certificates/350552.pem": exit status 83 (40.668541ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/350552.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /usr/share/ca-certificates/350552.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/350552.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.763625ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-968000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-968000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (28.767833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.27s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-968000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-968000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.460333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-968000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-968000 -n functional-968000: exit status 7 (30.545042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-968000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
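All five label assertions above fail at the same first step: no kubeconfig context named "functional-968000" exists, because the VM never started. A minimal Go sketch of the check the test performs (profile name and template taken from this run; assumes kubectl is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same go-template query as functional_test.go:218: print only the
		// label keys of the first node. No shell here, so no extra quoting.
		tmpl := "--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
		out, err := exec.Command("kubectl", "--context", "functional-968000",
			"get", "nodes", "--output=go-template", tmpl).CombinedOutput()
		if err != nil {
			// With a missing context, kubectl exits 1 before contacting any server.
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		for _, label := range []string{
			"minikube.k8s.io/commit", "minikube.k8s.io/version",
			"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
			"minikube.k8s.io/primary",
		} {
			if !strings.Contains(string(out), label) {
				fmt.Printf("missing expected label %q\n", label)
			}
		}
	}
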

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo systemctl is-active crio": exit status 83 (47.534958ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
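Exit status 83 recurs through the rest of this report, and in every case it pairs with the "host is not running: state=Stopped" hint rather than real command output. A sketch of telling that guard path apart from a genuine assertion failure when shelling out to minikube (binary path and profile as in this run):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-968000",
			"ssh", "sudo systemctl is-active crio")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 83 {
			// The command never reached the VM; stdout holds the usage hint.
			fmt.Printf("host not running:\n%s", out)
			return
		}
		// On a healthy docker-runtime cluster, is-active should report crio as
		// "inactive", which is what functional_test.go:2029 asserts.
		fmt.Printf("is-active crio: %s (err: %v)\n", out, err)
	}
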

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 version -o=json --components: exit status 83 (41.928916ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format short --alsologtostderr:
I0513 17:20:16.369230   35740 out.go:291] Setting OutFile to fd 1 ...
I0513 17:20:16.369379   35740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.369382   35740 out.go:304] Setting ErrFile to fd 2...
I0513 17:20:16.369385   35740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.369510   35740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:20:16.369917   35740 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:16.369974   35740 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format table --alsologtostderr:
I0513 17:20:16.586365   35752 out.go:291] Setting OutFile to fd 1 ...
I0513 17:20:16.586528   35752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.586531   35752 out.go:304] Setting ErrFile to fd 2...
I0513 17:20:16.586532   35752 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.586670   35752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:20:16.587085   35752 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:16.587145   35752 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format json --alsologtostderr:
I0513 17:20:16.550851   35750 out.go:291] Setting OutFile to fd 1 ...
I0513 17:20:16.551032   35750 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.551035   35750 out.go:304] Setting ErrFile to fd 2...
I0513 17:20:16.551038   35750 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.551160   35750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:20:16.551570   35750 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:16.551635   35750 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
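With the VM down, "image ls --format json" prints a well-formed but empty JSON array, so the check at functional_test.go:274 can never match. Judging by the failure message, that check appears to be a plain substring match on the raw output; a small sketch (expected repository name from the run, the element schema left opaque since this log never shows one):

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		stdout := "[]" // captured from the failing run above
		if !strings.Contains(stdout, "registry.k8s.io/pause") {
			fmt.Println("registry.k8s.io/pause not listed")
		}
		// Even when empty, the output should still decode as a JSON array.
		var images []json.RawMessage
		if err := json.Unmarshal([]byte(stdout), &images); err != nil {
			fmt.Println("not valid JSON:", err)
			return
		}
		fmt.Printf("%d images listed\n", len(images))
	}
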

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-968000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image ls --format yaml --alsologtostderr:
I0513 17:20:16.515787   35748 out.go:291] Setting OutFile to fd 1 ...
I0513 17:20:16.515919   35748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.515923   35748 out.go:304] Setting ErrFile to fd 2...
I0513 17:20:16.515925   35748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.516062   35748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:20:16.516455   35748 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:16.516517   35748 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh pgrep buildkitd: exit status 83 (40.746917ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image build -t localhost/my-image:functional-968000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-968000 image build -t localhost/my-image:functional-968000 testdata/build --alsologtostderr:
I0513 17:20:16.444889   35744 out.go:291] Setting OutFile to fd 1 ...
I0513 17:20:16.445285   35744 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.445288   35744 out.go:304] Setting ErrFile to fd 2...
I0513 17:20:16.445291   35744 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:20:16.445465   35744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:20:16.445889   35744 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:16.446354   35744 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:20:16.446595   35744 build_images.go:133] succeeded building to: 
I0513 17:20:16.446599   35744 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:442: expected "localhost/my-image:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-968000 docker-env) && out/minikube-darwin-arm64 status -p functional-968000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-968000 docker-env) && out/minikube-darwin-arm64 status -p functional-968000": exit status 1 (44.791917ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2: exit status 83 (39.663167ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
** stderr ** 
	I0513 17:20:16.245464   35734 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:20:16.245862   35734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:16.245866   35734 out.go:304] Setting ErrFile to fd 2...
	I0513 17:20:16.245869   35734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:16.246025   35734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:20:16.246222   35734 mustload.go:65] Loading cluster: functional-968000
	I0513 17:20:16.246416   35734 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:20:16.249767   35734 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
	I0513 17:20:16.253737   35734 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2: exit status 83 (41.267708ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
** stderr ** 
	I0513 17:20:16.327565   35738 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:20:16.327727   35738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:16.327731   35738 out.go:304] Setting ErrFile to fd 2...
	I0513 17:20:16.327733   35738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:16.327863   35738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:20:16.328089   35738 mustload.go:65] Loading cluster: functional-968000
	I0513 17:20:16.328283   35738 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:20:16.332782   35738 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
	I0513 17:20:16.336769   35738 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2: exit status 83 (41.774ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
** stderr ** 
	I0513 17:20:16.285275   35736 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:20:16.285396   35736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:16.285399   35736 out.go:304] Setting ErrFile to fd 2...
	I0513 17:20:16.285402   35736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:16.285527   35736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:20:16.285755   35736 mustload.go:65] Loading cluster: functional-968000
	I0513 17:20:16.285940   35736 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:20:16.290784   35736 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
	I0513 17:20:16.294647   35736 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-968000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-968000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-968000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.805292ms)

** stderr ** 
	error: context "functional-968000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-968000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service list: exit status 83 (42.416292ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-968000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service list -o json: exit status 83 (38.822333ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-968000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service --namespace=default --https --url hello-node: exit status 83 (41.805917ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-968000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service hello-node --url --format={{.IP}}: exit status 83 (38.843958ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-968000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 service hello-node --url: exit status 83 (44.741625ms)

-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-968000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test.go:1565: failed to parse "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"": parse "* The control-plane node functional-968000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-968000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
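The final parse failure is worth unpacking: the test feeds whatever "service hello-node --url" printed into net/url, and Go rejects any URL containing an ASCII control character, here the newline between the two lines of the usage hint. A self-contained reproduction:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// The captured two-line hint, newline included.
		s := "* The control-plane node functional-968000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-968000\""
		_, err := url.Parse(s)
		// Prints: parse "...": net/url: invalid control character in URL
		fmt.Println(err)
	}
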

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0513 17:19:33.500009   35514 out.go:291] Setting OutFile to fd 1 ...
I0513 17:19:33.500205   35514 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:19:33.500207   35514 out.go:304] Setting ErrFile to fd 2...
I0513 17:19:33.500210   35514 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:19:33.500345   35514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:19:33.500601   35514 mustload.go:65] Loading cluster: functional-968000
I0513 17:19:33.500817   35514 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:19:33.504111   35514 out.go:177] * The control-plane node functional-968000 host is not running: state=Stopped
I0513 17:19:33.511112   35514 out.go:177]   To start a cluster, run: "minikube start -p functional-968000"

stdout: * The control-plane node functional-968000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-968000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 35515: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-968000": client config: context "functional-968000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (86.9s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-968000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-968000 get svc nginx-svc: exit status 1 (71.038125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-968000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-968000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (86.90s)
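The odd-looking Get "http:" error falls out of an empty tunnel endpoint: the test probed the bare URL "http://" (presumably because the service never received an ingress IP), which parses to a URL with an empty host, and net/http refuses to send such a request. Reproduction:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// With no tunnel and no LoadBalancer IP, the probe URL degenerates to "http://".
		_, err := http.Get("http://")
		// Prints: Get "http:": http: no Host in request URL
		fmt.Println(err)
	}
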

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-968000 image load --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr: (1.306730083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-968000 image load --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr: (1.2907915s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.487244292s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-968000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-968000 image load --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr: (1.285548167s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.85s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image save gcr.io/google-containers/addon-resizer:functional-968000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)
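Here "image save" exits quietly without writing the tarball, and the follow-up existence check at functional_test.go:385 fails. That check presumably reduces to a stat call; a sketch with the path from this run:

	package main

	import (
		"errors"
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/workspace/addon-resizer-save.tar"
		if _, err := os.Stat(path); errors.Is(err, os.ErrNotExist) {
			fmt.Printf("expected %q to exist after `image save`, but it doesn't\n", path)
		}
	}
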

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-968000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.025386042s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
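The scutil dump shows the tunnel-side configuration is actually in place: resolver #8 scopes cluster.local to 10.96.0.10. The query times out because nothing answers at that address while the VM is stopped. The same probe as the dig invocation, written against Go's resolver with a pinned server (server address and name from the run; a live cluster would return the nginx-svc ClusterIP):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				// Pin every lookup to the cluster DNS service, like dig @10.96.0.10.
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		// In this run the lookup would time out, mirroring dig's exit status 9.
		fmt.Println(addrs, err)
	}
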

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (34.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (34.67s)

TestMultiControlPlane/serial/StartCluster (10.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-906000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-906000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.056829667s)

-- stdout --
	* [ha-906000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-906000" primary control-plane node in "ha-906000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-906000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
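This "Connection refused" against /var/run/socket_vmnet appears to be the root failure behind the stopped clusters throughout this report: the qemu2 driver cannot create a VM without a running socket_vmnet listener. A preflight sketch that surfaces the same error without waiting out a full start attempt (socket path from the output above):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet must be running and listening here for the qemu2 driver.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// e.g. dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println(err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is reachable")
	}
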
** stderr ** 
	I0513 17:22:00.751639   35800 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:22:00.751764   35800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:22:00.751767   35800 out.go:304] Setting ErrFile to fd 2...
	I0513 17:22:00.751770   35800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:22:00.751900   35800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:22:00.752929   35800 out.go:298] Setting JSON to false
	I0513 17:22:00.769242   35800 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26490,"bootTime":1715619630,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:22:00.769306   35800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:22:00.775885   35800 out.go:177] * [ha-906000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:22:00.783810   35800 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:22:00.783863   35800 notify.go:220] Checking for updates...
	I0513 17:22:00.787826   35800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:22:00.790832   35800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:22:00.793866   35800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:22:00.795300   35800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:22:00.797783   35800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:22:00.801057   35800 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:22:00.805691   35800 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:22:00.812786   35800 start.go:297] selected driver: qemu2
	I0513 17:22:00.812792   35800 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:22:00.812799   35800 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:22:00.814971   35800 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:22:00.817889   35800 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:22:00.820817   35800 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:22:00.820836   35800 cni.go:84] Creating CNI manager for ""
	I0513 17:22:00.820840   35800 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0513 17:22:00.820844   35800 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0513 17:22:00.820886   35800 start.go:340] cluster config:
	{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:
cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/so
cket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:22:00.825427   35800 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:22:00.831739   35800 out.go:177] * Starting "ha-906000" primary control-plane node in "ha-906000" cluster
	I0513 17:22:00.835771   35800 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:22:00.835792   35800 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:22:00.835802   35800 cache.go:56] Caching tarball of preloaded images
	I0513 17:22:00.835869   35800 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:22:00.835875   35800 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:22:00.836101   35800 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/ha-906000/config.json ...
	I0513 17:22:00.836112   35800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/ha-906000/config.json: {Name:mk9b4e8b7844ef7e0aae603b1c1cdf05ef3bed1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:22:00.836482   35800 start.go:360] acquireMachinesLock for ha-906000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:22:00.836514   35800 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "ha-906000"
	I0513 17:22:00.836525   35800 start.go:93] Provisioning new machine with config: &{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:22:00.836562   35800 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:22:00.844844   35800 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:22:00.861497   35800 start.go:159] libmachine.API.Create for "ha-906000" (driver="qemu2")
	I0513 17:22:00.861526   35800 client.go:168] LocalClient.Create starting
	I0513 17:22:00.861580   35800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:22:00.861609   35800 main.go:141] libmachine: Decoding PEM data...
	I0513 17:22:00.861618   35800 main.go:141] libmachine: Parsing certificate...
	I0513 17:22:00.861656   35800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:22:00.861678   35800 main.go:141] libmachine: Decoding PEM data...
	I0513 17:22:00.861684   35800 main.go:141] libmachine: Parsing certificate...
	I0513 17:22:00.862192   35800 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:22:01.230999   35800 main.go:141] libmachine: Creating SSH key...
	I0513 17:22:01.307298   35800 main.go:141] libmachine: Creating Disk image...
	I0513 17:22:01.307303   35800 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:22:01.307492   35800 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:22:01.328614   35800 main.go:141] libmachine: STDOUT: 
	I0513 17:22:01.328731   35800 main.go:141] libmachine: STDERR: 
	I0513 17:22:01.328792   35800 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2 +20000M
	I0513 17:22:01.339985   35800 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:22:01.340009   35800 main.go:141] libmachine: STDERR: 
	I0513 17:22:01.340028   35800 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:22:01.340030   35800 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:22:01.340057   35800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:75:fe:75:42:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:22:01.341726   35800 main.go:141] libmachine: STDOUT: 
	I0513 17:22:01.341744   35800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:22:01.341764   35800 client.go:171] duration metric: took 480.236625ms to LocalClient.Create
	I0513 17:22:03.343923   35800 start.go:128] duration metric: took 2.507354708s to createHost
	I0513 17:22:03.343980   35800 start.go:83] releasing machines lock for "ha-906000", held for 2.507476041s
	W0513 17:22:03.344078   35800 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:22:03.355231   35800 out.go:177] * Deleting "ha-906000" in qemu2 ...
	W0513 17:22:03.388910   35800 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:22:03.388943   35800 start.go:728] Will try again in 5 seconds ...
	I0513 17:22:08.391280   35800 start.go:360] acquireMachinesLock for ha-906000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:22:08.391737   35800 start.go:364] duration metric: took 340.208µs to acquireMachinesLock for "ha-906000"
	I0513 17:22:08.391869   35800 start.go:93] Provisioning new machine with config: &{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:22:08.392158   35800 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:22:08.401717   35800 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:22:08.451060   35800 start.go:159] libmachine.API.Create for "ha-906000" (driver="qemu2")
	I0513 17:22:08.451120   35800 client.go:168] LocalClient.Create starting
	I0513 17:22:08.451226   35800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:22:08.451302   35800 main.go:141] libmachine: Decoding PEM data...
	I0513 17:22:08.451317   35800 main.go:141] libmachine: Parsing certificate...
	I0513 17:22:08.451384   35800 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:22:08.451428   35800 main.go:141] libmachine: Decoding PEM data...
	I0513 17:22:08.451440   35800 main.go:141] libmachine: Parsing certificate...
	I0513 17:22:08.452135   35800 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:22:08.608274   35800 main.go:141] libmachine: Creating SSH key...
	I0513 17:22:08.710787   35800 main.go:141] libmachine: Creating Disk image...
	I0513 17:22:08.710795   35800 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:22:08.710983   35800 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:22:08.723371   35800 main.go:141] libmachine: STDOUT: 
	I0513 17:22:08.723396   35800 main.go:141] libmachine: STDERR: 
	I0513 17:22:08.723452   35800 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2 +20000M
	I0513 17:22:08.734191   35800 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:22:08.734208   35800 main.go:141] libmachine: STDERR: 
	I0513 17:22:08.734217   35800 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:22:08.734226   35800 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:22:08.734259   35800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:55:20:a3:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:22:08.735926   35800 main.go:141] libmachine: STDOUT: 
	I0513 17:22:08.735960   35800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:22:08.735973   35800 client.go:171] duration metric: took 284.849208ms to LocalClient.Create
	I0513 17:22:10.738127   35800 start.go:128] duration metric: took 2.3459445s to createHost
	I0513 17:22:10.738651   35800 start.go:83] releasing machines lock for "ha-906000", held for 2.346858583s
	W0513 17:22:10.739015   35800 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-906000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:22:10.747593   35800 out.go:177] 
	W0513 17:22:10.754685   35800 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:22:10.754717   35800 out.go:239] * 
	* 
	W0513 17:22:10.757212   35800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:22:10.767636   35800 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-906000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (68.230084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.13s)
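
Note: this failure is rooted in one error, repeated on both create attempts above: Failed to connect to "/var/run/socket_vmnet": Connection refused. No socket_vmnet daemon was listening on the host, so the qemu2 driver could never attach the VM's network, and every later test in this serial group runs against a stopped cluster. As a minimal illustration, a hypothetical standalone Go probe (not part of the test suite; the socket path is taken from SocketVMnetPath in the cluster config logged above) showing that the reachability check amounts to dialing the unix socket:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	// Dial the control socket that socket_vmnet_client (and hence the qemu2
	// driver) depends on. A "connection refused" from this dial is exactly
	// the failure mode recorded in the log above.
	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If the dial fails with "connection refused", the fix is on the host side (start or restart the socket_vmnet service, however it was installed); retrying minikube start, as the log shows, simply fails again.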

TestMultiControlPlane/serial/DeployApp (112.46s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.058958ms)

** stderr ** 
	error: cluster "ha-906000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- rollout status deployment/busybox: exit status 1 (55.500333ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.080625ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.84375ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.53425ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.614792ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.66875ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.152084ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.62675ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.340708ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.898291ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.051375ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.904125ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.967583ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.663083ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.297125ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.066875ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.0145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (112.46s)

TestMultiControlPlane/serial/PingHostFromPods (0.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.487833ms)

** stderr ** 
	error: no server found for cluster "ha-906000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.122541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.08s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-906000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-906000 -v=7 --alsologtostderr: exit status 83 (43.13425ms)

-- stdout --
	* The control-plane node ha-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-906000"

-- /stdout --
** stderr ** 
	I0513 17:24:03.424384   35900 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:03.425037   35900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.425051   35900 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:03.425054   35900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.425234   35900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:03.425486   35900 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:03.425685   35900 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:03.430553   35900 out.go:177] * The control-plane node ha-906000 host is not running: state=Stopped
	I0513 17:24:03.434384   35900 out.go:177]   To start a cluster, run: "minikube start -p ha-906000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-906000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.018458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-906000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-906000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.161833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-906000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-906000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-906000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.909792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-906000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-906000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.123792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status --output json -v=7 --alsologtostderr: exit status 7 (28.9135ms)

-- stdout --
	{"Name":"ha-906000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0513 17:24:03.650693   35913 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:03.650844   35913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.650847   35913 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:03.650849   35913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.650995   35913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:03.651127   35913 out.go:298] Setting JSON to true
	I0513 17:24:03.651138   35913 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:03.651202   35913 notify.go:220] Checking for updates...
	I0513 17:24:03.651322   35913 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:03.651326   35913 status.go:255] checking status of ha-906000 ...
	I0513 17:24:03.651555   35913 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:03.651559   35913 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:03.651561   35913 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-906000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.981ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
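
Note: the decode failure at ha_test.go:333 above is a shape mismatch rather than corrupt output: with only one node ever provisioned, "minikube status --output json" printed a single JSON object (see the stdout block above), while the test decodes into a slice of per-node statuses ([]cmd.Status). A small, self-contained Go sketch of the same mismatch, using a stand-in status type rather than minikube's real cmd.Status:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Stand-in for the per-node status shape; illustration only.
	type status struct {
		Name string
		Host string
	}

	func main() {
		raw := []byte(`{"Name":"ha-906000","Host":"Stopped"}`) // one object, not an array

		var many []status
		// Fails with "json: cannot unmarshal object into Go value of type
		// []main.status" -- the same class of error reported by the test.
		if err := json.Unmarshal(raw, &many); err != nil {
			fmt.Println("as slice:", err)
		}

		var one status
		// Decoding the same bytes into a single struct succeeds.
		if err := json.Unmarshal(raw, &one); err == nil {
			fmt.Printf("as object: %+v\n", one)
		}
	}

Decoding the same bytes into a single struct succeeds, which is why single-node output cannot satisfy a test that expects an array.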

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.866708ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0513 17:24:03.709667   35917 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:03.710243   35917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.710246   35917 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:03.710248   35917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.710387   35917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:03.710608   35917 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:03.710814   35917 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:03.715068   35917 out.go:177] 
	W0513 17:24:03.718040   35917 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0513 17:24:03.718045   35917 out.go:239] * 
	* 
	W0513 17:24:03.720961   35917 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:24:03.725123   35917 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-906000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (29.185875ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:03.757463   35919 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:03.757623   35919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.757625   35919 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:03.757628   35919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.757751   35919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:03.757885   35919 out.go:298] Setting JSON to false
	I0513 17:24:03.757896   35919 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:03.757957   35919 notify.go:220] Checking for updates...
	I0513 17:24:03.758105   35919 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:03.758109   35919 status.go:255] checking status of ha-906000 ...
	I0513 17:24:03.758334   35919 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:03.758338   35919 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:03.758340   35919 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.262459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-906000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.586875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
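This subtest inspects "minikube profile list --output json" and expects the profile's Status to read "Degraded" once a secondary control plane is down; here the whole cluster is stopped, so it reads "Stopped". A minimal sketch of that JSON check, with a hypothetical struct trimmed to the fields the assertion reads:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical struct; only the fields the check needs.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the actual `profile list` output above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-906000","Status":"Stopped"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-906000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to have status Degraded, got %q\n", p.Name, p.Status)
		}
	}
}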

TestMultiControlPlane/serial/RestartSecondaryNode (43.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.036916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0513 17:24:03.913171   35929 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:03.913734   35929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.913738   35929 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:03.913740   35929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.913881   35929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:03.914112   35929 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:03.914315   35929 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:03.918062   35929 out.go:177] 
	W0513 17:24:03.922082   35929 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0513 17:24:03.922087   35929 out.go:239] * 
	* 
	W0513 17:24:03.924773   35929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:24:03.929019   35929 out.go:177] 

** /stderr **
ha_test.go:422: I0513 17:24:03.913171   35929 out.go:291] Setting OutFile to fd 1 ...
I0513 17:24:03.913734   35929 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:24:03.913738   35929 out.go:304] Setting ErrFile to fd 2...
I0513 17:24:03.913740   35929 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:24:03.913881   35929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:24:03.914112   35929 mustload.go:65] Loading cluster: ha-906000
I0513 17:24:03.914315   35929 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:24:03.918062   35929 out.go:177] 
W0513 17:24:03.922082   35929 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0513 17:24:03.922087   35929 out.go:239] * 
* 
W0513 17:24:03.924773   35929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0513 17:24:03.929019   35929 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-906000 node start m02 -v=7 --alsologtostderr": exit status 85
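Exit status 85 (GUEST_NODE_RETRIEVE) is consistent with the saved profile shown earlier: its Nodes list contains only the unnamed primary, so a lookup for "m02" finds nothing. A toy illustration of that lookup, with hypothetical types, not the driver's actual code:

package main

import "fmt"

type node struct{ Name string }

// findNode mirrors the failing lookup: scan the profile's node list by name.
func findNode(nodes []node, name string) (node, bool) {
	for _, n := range nodes {
		if n.Name == name {
			return n, true
		}
	}
	return node{}, false
}

func main() {
	// The ha-906000 profile above holds a single, unnamed primary node.
	nodes := []node{{Name: ""}}
	if _, ok := findNode(nodes, "m02"); !ok {
		fmt.Println("Could not find node m02") // the GUEST_NODE_RETRIEVE message
	}
}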
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (29.242833ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:03.961706   35931 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:03.961866   35931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.961869   35931 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:03.961871   35931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:03.962009   35931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:03.962134   35931 out.go:298] Setting JSON to false
	I0513 17:24:03.962145   35931 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:03.962203   35931 notify.go:220] Checking for updates...
	I0513 17:24:03.962357   35931 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:03.962362   35931 status.go:255] checking status of ha-906000 ...
	I0513 17:24:03.962565   35931 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:03.962568   35931 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:03.962570   35931 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (74.353084ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:04.935746   35933 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:04.935994   35933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:04.935998   35933 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:04.936001   35933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:04.936193   35933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:04.936362   35933 out.go:298] Setting JSON to false
	I0513 17:24:04.936378   35933 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:04.936415   35933 notify.go:220] Checking for updates...
	I0513 17:24:04.936680   35933 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:04.936690   35933 status.go:255] checking status of ha-906000 ...
	I0513 17:24:04.936973   35933 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:04.936978   35933 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:04.936982   35933 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (72.063083ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:06.311958   35935 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:06.312177   35935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:06.312181   35935 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:06.312185   35935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:06.312358   35935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:06.312525   35935 out.go:298] Setting JSON to false
	I0513 17:24:06.312538   35935 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:06.312582   35935 notify.go:220] Checking for updates...
	I0513 17:24:06.312803   35935 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:06.312814   35935 status.go:255] checking status of ha-906000 ...
	I0513 17:24:06.313093   35935 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:06.313097   35935 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:06.313100   35935 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (72.976458ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:09.369563   35937 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:09.369753   35937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:09.369757   35937 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:09.369761   35937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:09.369920   35937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:09.370091   35937 out.go:298] Setting JSON to false
	I0513 17:24:09.370106   35937 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:09.370147   35937 notify.go:220] Checking for updates...
	I0513 17:24:09.370386   35937 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:09.370392   35937 status.go:255] checking status of ha-906000 ...
	I0513 17:24:09.370681   35937 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:09.370686   35937 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:09.370689   35937 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (75.23175ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:12.372975   35939 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:12.373209   35939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:12.373213   35939 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:12.373217   35939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:12.373399   35939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:12.373598   35939 out.go:298] Setting JSON to false
	I0513 17:24:12.373615   35939 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:12.373663   35939 notify.go:220] Checking for updates...
	I0513 17:24:12.373911   35939 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:12.373922   35939 status.go:255] checking status of ha-906000 ...
	I0513 17:24:12.374254   35939 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:12.374259   35939 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:12.374262   35939 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (71.641583ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:18.047314   35941 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:18.047532   35941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:18.047537   35941 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:18.047541   35941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:18.047713   35941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:18.047873   35941 out.go:298] Setting JSON to false
	I0513 17:24:18.047890   35941 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:18.047934   35941 notify.go:220] Checking for updates...
	I0513 17:24:18.048167   35941 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:18.048173   35941 status.go:255] checking status of ha-906000 ...
	I0513 17:24:18.048481   35941 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:18.048486   35941 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:18.048489   35941 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (74.850333ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:28.980795   35946 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:28.981032   35946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:28.981037   35946 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:28.981041   35946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:28.981226   35946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:28.981405   35946 out.go:298] Setting JSON to false
	I0513 17:24:28.981422   35946 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:28.981462   35946 notify.go:220] Checking for updates...
	I0513 17:24:28.981699   35946 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:28.981706   35946 status.go:255] checking status of ha-906000 ...
	I0513 17:24:28.981994   35946 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:28.981999   35946 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:28.982002   35946 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (73.601292ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:36.611591   35948 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:36.611797   35948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:36.611802   35948 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:36.611805   35948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:36.611978   35948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:36.612164   35948 out.go:298] Setting JSON to false
	I0513 17:24:36.612178   35948 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:36.612224   35948 notify.go:220] Checking for updates...
	I0513 17:24:36.612469   35948 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:36.612476   35948 status.go:255] checking status of ha-906000 ...
	I0513 17:24:36.612820   35948 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:36.612825   35948 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:36.612828   35948 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (72.674292ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:47.231272   35953 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:47.231502   35953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:47.231506   35953 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:47.231509   35953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:47.231678   35953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:47.231821   35953 out.go:298] Setting JSON to false
	I0513 17:24:47.231836   35953 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:47.231876   35953 notify.go:220] Checking for updates...
	I0513 17:24:47.232112   35953 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:47.232118   35953 status.go:255] checking status of ha-906000 ...
	I0513 17:24:47.232444   35953 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:47.232449   35953 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:47.232452   35953 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (32.361ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (43.38s)
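Note the shape of this failure: the node start fails immediately, then the test re-runs "minikube status" nine times at growing intervals (roughly 1s to 11s apart in the timestamps) before giving up, which accounts for most of the 43.38s. A sketch of that poll-until-healthy pattern, assuming a simple hard-coded delay schedule rather than the exact ha_test.go retry helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical delay schedule; the real helper's backoff is not shown in the log.
	delays := []time.Duration{1 * time.Second, 2 * time.Second, 3 * time.Second, 6 * time.Second, 11 * time.Second}
	for i, d := range delays {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-906000", "status", "-v=7", "--alsologtostderr")
		if err := cmd.Run(); err == nil {
			fmt.Println("cluster reported healthy")
			return
		}
		fmt.Printf("attempt %d failed; retrying in %v\n", i+1, d)
		time.Sleep(d)
	}
	fmt.Println("cluster never became healthy")
}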

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-906000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-906000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.057708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
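The node-count assertion decodes the same "profile list" JSON and expects the profile's Nodes array to have grown to 4 entries; it still holds only the single primary. A minimal sketch of that check, with a hypothetical struct trimmed to the fields read:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical struct trimmed to the fields the node-count check reads.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the `profile list` output above: one node where 4 are expected.
	raw := []byte(`{"valid":[{"Name":"ha-906000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var ps profiles
	if err := json.Unmarshal(raw, &ps); err != nil {
		panic(err)
	}
	for _, p := range ps.Valid {
		fmt.Printf("%s has %d node(s); the HA layout expects 4\n", p.Name, len(p.Config.Nodes))
	}
}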

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.28s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-906000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-906000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-906000 -v=7 --alsologtostderr: (2.930468s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-906000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-906000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.219732s)

-- stdout --
	* [ha-906000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-906000" primary control-plane node in "ha-906000" cluster
	* Restarting existing qemu2 VM for "ha-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:24:50.387800   35983 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:50.387964   35983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:50.387968   35983 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:50.387971   35983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:50.388148   35983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:50.389366   35983 out.go:298] Setting JSON to false
	I0513 17:24:50.408635   35983 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26660,"bootTime":1715619630,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:24:50.408726   35983 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:24:50.412502   35983 out.go:177] * [ha-906000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:24:50.419326   35983 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:24:50.422328   35983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:24:50.419386   35983 notify.go:220] Checking for updates...
	I0513 17:24:50.425245   35983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:24:50.428288   35983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:24:50.431342   35983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:24:50.434295   35983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:24:50.437642   35983 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:50.437720   35983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:24:50.442288   35983 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:24:50.449261   35983 start.go:297] selected driver: qemu2
	I0513 17:24:50.449270   35983 start.go:901] validating driver "qemu2" against &{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:24:50.449353   35983 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:24:50.451644   35983 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:24:50.451690   35983 cni.go:84] Creating CNI manager for ""
	I0513 17:24:50.451695   35983 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 17:24:50.451745   35983 start.go:340] cluster config:
	{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:24:50.456056   35983 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:24:50.464204   35983 out.go:177] * Starting "ha-906000" primary control-plane node in "ha-906000" cluster
	I0513 17:24:50.468337   35983 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:24:50.468356   35983 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:24:50.468366   35983 cache.go:56] Caching tarball of preloaded images
	I0513 17:24:50.468426   35983 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:24:50.468432   35983 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:24:50.468496   35983 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/ha-906000/config.json ...
	I0513 17:24:50.468974   35983 start.go:360] acquireMachinesLock for ha-906000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:24:50.469012   35983 start.go:364] duration metric: took 31.291µs to acquireMachinesLock for "ha-906000"
	I0513 17:24:50.469027   35983 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:24:50.469032   35983 fix.go:54] fixHost starting: 
	I0513 17:24:50.469163   35983 fix.go:112] recreateIfNeeded on ha-906000: state=Stopped err=<nil>
	W0513 17:24:50.469174   35983 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:24:50.477318   35983 out.go:177] * Restarting existing qemu2 VM for "ha-906000" ...
	I0513 17:24:50.481193   35983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:55:20:a3:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:24:50.483594   35983 main.go:141] libmachine: STDOUT: 
	I0513 17:24:50.483617   35983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:24:50.483648   35983 fix.go:56] duration metric: took 14.61575ms for fixHost
	I0513 17:24:50.483653   35983 start.go:83] releasing machines lock for "ha-906000", held for 14.636292ms
	W0513 17:24:50.483660   35983 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:24:50.483705   35983 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:24:50.483710   35983 start.go:728] Will try again in 5 seconds ...
	I0513 17:24:55.485870   35983 start.go:360] acquireMachinesLock for ha-906000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:24:55.486261   35983 start.go:364] duration metric: took 296.042µs to acquireMachinesLock for "ha-906000"
	I0513 17:24:55.486401   35983 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:24:55.486420   35983 fix.go:54] fixHost starting: 
	I0513 17:24:55.487069   35983 fix.go:112] recreateIfNeeded on ha-906000: state=Stopped err=<nil>
	W0513 17:24:55.487098   35983 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:24:55.491516   35983 out.go:177] * Restarting existing qemu2 VM for "ha-906000" ...
	I0513 17:24:55.499772   35983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:55:20:a3:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:24:55.508898   35983 main.go:141] libmachine: STDOUT: 
	I0513 17:24:55.508948   35983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:24:55.509007   35983 fix.go:56] duration metric: took 22.587542ms for fixHost
	I0513 17:24:55.509020   35983 start.go:83] releasing machines lock for "ha-906000", held for 22.740375ms
	W0513 17:24:55.509196   35983 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:24:55.516359   35983 out.go:177] 
	W0513 17:24:55.520490   35983 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:24:55.520514   35983 out.go:239] * 
	* 
	W0513 17:24:55.523329   35983 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:24:55.530412   35983 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-906000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-906000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (31.832125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.28s)
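The root cause of this restart failure is the repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused': the qemu2 driver fronts the VM's network through the socket_vmnet daemon, and nothing is accepting connections on that socket on this runner, so both start attempts die before qemu boots. A quick probe of the socket, assuming the default SocketVMnetPath from the config above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same path the driver hands to socket_vmnet_client in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // matches the driver's failure mode
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}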

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.619958ms)

-- stdout --
	* The control-plane node ha-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-906000"

-- /stdout --
** stderr ** 
	I0513 17:24:55.671789   35995 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:55.672334   35995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:55.672338   35995 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:55.672341   35995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:55.672512   35995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:55.672721   35995 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:55.672902   35995 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:55.677367   35995 out.go:177] * The control-plane node ha-906000 host is not running: state=Stopped
	I0513 17:24:55.681315   35995 out.go:177]   To start a cluster, run: "minikube start -p ha-906000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-906000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (29.104667ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:55.713621   35997 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:55.713799   35997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:55.713802   35997 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:55.713804   35997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:55.713933   35997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:55.714073   35997 out.go:298] Setting JSON to false
	I0513 17:24:55.714085   35997 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:55.714140   35997 notify.go:220] Checking for updates...
	I0513 17:24:55.714294   35997 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:55.714298   35997 status.go:255] checking status of ha-906000 ...
	I0513 17:24:55.714500   35997 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:55.714504   35997 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:55.714506   35997 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.954917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-906000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"Disable
DriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":tr
ue,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseI
nterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.733541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
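The assertion above diffs the entire serialized profile just to check one field. When reproducing locally, it is easier to pull out only what the test looks at (Status and node count) from `profile list --output json`; a sketch, assuming jq is available on the agent:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | [.Name, .Status, (.Config.Nodes | length)] | @tsv'
    # For this run it would print "ha-906000  Stopped  1",
    # while the test expects Status "Degraded" after a secondary-node delete.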

TestMultiControlPlane/serial/StopCluster (3.62s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-906000 stop -v=7 --alsologtostderr: (3.521490583s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (69.989667ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:24:59.430946   36027 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:59.431158   36027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:59.431162   36027 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:59.431165   36027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:59.431313   36027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:59.431471   36027 out.go:298] Setting JSON to false
	I0513 17:24:59.431484   36027 mustload.go:65] Loading cluster: ha-906000
	I0513 17:24:59.431523   36027 notify.go:220] Checking for updates...
	I0513 17:24:59.431764   36027 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:59.431770   36027 status.go:255] checking status of ha-906000 ...
	I0513 17:24:59.432042   36027 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0513 17:24:59.432046   36027 status.go:343] host is not running, skipping remaining checks
	I0513 17:24:59.432049   36027 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-906000 status -v=7 --alsologtostderr": ha-906000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (31.901583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.62s)

TestMultiControlPlane/serial/RestartCluster (5.24s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-906000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-906000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.176760334s)

-- stdout --
	* [ha-906000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-906000" primary control-plane node in "ha-906000" cluster
	* Restarting existing qemu2 VM for "ha-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-906000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:24:59.491674   36031 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:24:59.491816   36031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:59.491820   36031 out.go:304] Setting ErrFile to fd 2...
	I0513 17:24:59.491822   36031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:24:59.491946   36031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:24:59.492905   36031 out.go:298] Setting JSON to false
	I0513 17:24:59.508823   36031 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26669,"bootTime":1715619630,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:24:59.508893   36031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:24:59.512953   36031 out.go:177] * [ha-906000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:24:59.519997   36031 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:24:59.523923   36031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:24:59.520048   36031 notify.go:220] Checking for updates...
	I0513 17:24:59.527898   36031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:24:59.530950   36031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:24:59.533866   36031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:24:59.536913   36031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:24:59.540225   36031 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:24:59.540515   36031 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:24:59.544867   36031 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:24:59.551891   36031 start.go:297] selected driver: qemu2
	I0513 17:24:59.551898   36031 start.go:901] validating driver "qemu2" against &{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:24:59.551949   36031 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:24:59.554063   36031 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:24:59.554087   36031 cni.go:84] Creating CNI manager for ""
	I0513 17:24:59.554091   36031 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 17:24:59.554134   36031 start.go:340] cluster config:
	{Name:ha-906000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-906000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNS
Domain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath
:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:24:59.558087   36031 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:24:59.565902   36031 out.go:177] * Starting "ha-906000" primary control-plane node in "ha-906000" cluster
	I0513 17:24:59.569736   36031 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:24:59.569751   36031 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:24:59.569761   36031 cache.go:56] Caching tarball of preloaded images
	I0513 17:24:59.569814   36031 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:24:59.569820   36031 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:24:59.569891   36031 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/ha-906000/config.json ...
	I0513 17:24:59.570351   36031 start.go:360] acquireMachinesLock for ha-906000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:24:59.570381   36031 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "ha-906000"
	I0513 17:24:59.570390   36031 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:24:59.570405   36031 fix.go:54] fixHost starting: 
	I0513 17:24:59.570517   36031 fix.go:112] recreateIfNeeded on ha-906000: state=Stopped err=<nil>
	W0513 17:24:59.570524   36031 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:24:59.578919   36031 out.go:177] * Restarting existing qemu2 VM for "ha-906000" ...
	I0513 17:24:59.582896   36031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:55:20:a3:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:24:59.584833   36031 main.go:141] libmachine: STDOUT: 
	I0513 17:24:59.584855   36031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:24:59.584880   36031 fix.go:56] duration metric: took 14.485084ms for fixHost
	I0513 17:24:59.584884   36031 start.go:83] releasing machines lock for "ha-906000", held for 14.498708ms
	W0513 17:24:59.584890   36031 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:24:59.584917   36031 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:24:59.584922   36031 start.go:728] Will try again in 5 seconds ...
	I0513 17:25:04.586985   36031 start.go:360] acquireMachinesLock for ha-906000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:25:04.587399   36031 start.go:364] duration metric: took 332.917µs to acquireMachinesLock for "ha-906000"
	I0513 17:25:04.587561   36031 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:25:04.587579   36031 fix.go:54] fixHost starting: 
	I0513 17:25:04.588392   36031 fix.go:112] recreateIfNeeded on ha-906000: state=Stopped err=<nil>
	W0513 17:25:04.588420   36031 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:25:04.592821   36031 out.go:177] * Restarting existing qemu2 VM for "ha-906000" ...
	I0513 17:25:04.597028   36031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:55:20:a3:e5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/ha-906000/disk.qcow2
	I0513 17:25:04.606466   36031 main.go:141] libmachine: STDOUT: 
	I0513 17:25:04.606544   36031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:25:04.606627   36031 fix.go:56] duration metric: took 19.047542ms for fixHost
	I0513 17:25:04.606648   36031 start.go:83] releasing machines lock for "ha-906000", held for 19.194083ms
	W0513 17:25:04.606878   36031 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-906000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:25:04.614746   36031 out.go:177] 
	W0513 17:25:04.618895   36031 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:25:04.618949   36031 out.go:239] * 
	* 
	W0513 17:25:04.621771   36031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:25:04.628794   36031 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-906000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (64.408792ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.24s)
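Every start/restart in this report dies the same way: qemu is launched through socket_vmnet_client, which cannot connect to /var/run/socket_vmnet, so the root cause is a socket_vmnet daemon that is not running (or not listening) on the build agent rather than anything in the cluster under test. A first-pass check on the host, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (the service name here is an assumption):

    ls -l /var/run/socket_vmnet                    # the unix socket should exist
    sudo lsof -U 2>/dev/null | grep socket_vmnet   # and something should be listening on it
    # If nothing is listening, restart the (assumed) Homebrew-managed launchd job:
    sudo brew services restart socket_vmnet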

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-906000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"Disable
DriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":tr
ue,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseI
nterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.917834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-906000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-906000 --control-plane -v=7 --alsologtostderr: exit status 83 (37.706167ms)

-- stdout --
	* The control-plane node ha-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-906000"

-- /stdout --
** stderr ** 
	I0513 17:25:04.838215   36047 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:25:04.838391   36047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:25:04.838394   36047 out.go:304] Setting ErrFile to fd 2...
	I0513 17:25:04.838396   36047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:25:04.838534   36047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:25:04.838782   36047 mustload.go:65] Loading cluster: ha-906000
	I0513 17:25:04.838969   36047 config.go:182] Loaded profile config "ha-906000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:25:04.840778   36047 out.go:177] * The control-plane node ha-906000 host is not running: state=Stopped
	I0513 17:25:04.844828   36047 out.go:177]   To start a cluster, run: "minikube start -p ha-906000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-906000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (29.019583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-906000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":f
alse,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":tr
ue}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":600000
00000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-906000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-906000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-906000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDri
verMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-906000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,
\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInte
rval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-906000 -n ha-906000: exit status 7 (28.97625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-906000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (10.13s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-234000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-234000 --driver=qemu2 : exit status 80 (10.063310958s)

-- stdout --
	* [image-234000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-234000" primary control-plane node in "image-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-234000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-234000 -n image-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-234000 -n image-234000: exit status 7 (67.395958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-234000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.13s)
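The failing command lines earlier in this report show how the qemu2 driver wires networking: socket_vmnet_client connects to /var/run/socket_vmnet and execs qemu with the connection handed over as fd 3 (`-netdev socket,id=net0,fd=3`). The same wrapper doubles as a connectivity probe without involving qemu at all; a sketch, where using `true` as the payload command is my choice, not something minikube does:

    if /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true; then
        echo "socket_vmnet reachable"
    else
        echo "connection refused, as in the failures above"   # exits non-zero, matching the logs
    fi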

TestJSONOutput/start/Command (9.75s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-388000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-388000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.744615875s)

-- stdout --
	{"specversion":"1.0","id":"b68c6dce-6633-465e-b4c3-28482718fd5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-388000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"80a81889-8b65-444a-929d-1dce4f302cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18872"}}
	{"specversion":"1.0","id":"d0ae7591-588c-4192-9bb6-98c477e49722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig"}}
	{"specversion":"1.0","id":"439c4a44-7cef-4e28-b701-46b1a54ab442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"12fb6fca-9937-45c5-ae19-f5de9b49053c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d12095a4-eef7-4713-a883-90d38c2c9e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube"}}
	{"specversion":"1.0","id":"e68dbace-6154-4034-a59c-9055da0a995b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"54c10838-e30b-4d5a-b599-cf28a1952c27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0e6f9fc-eee4-4913-b685-054a93f97507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9e80fc23-d25f-4ed5-a453-648ba8c61e29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-388000\" primary control-plane node in \"json-output-388000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f69590ae-f7ea-4bee-b828-08883d0b3e16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f96ec03a-2f21-46a4-b805-ed95bc5d6499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-388000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"d19dc78f-f76b-4f54-8183-52a47b6ca6fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"12ac1fd0-b84e-427a-a44a-d15d12b3edd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"44c96c03-ec1c-48bc-9437-841e3891209c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-388000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b5f30e38-ef7a-41b7-b444-786675f278e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"f741fb45-cbec-4855-91ac-3f3c8678c752","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-388000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.75s)
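This JSON-output failure is a knock-on effect of the same driver error: with --output=json minikube emits one CloudEvent per line, but the raw `OUTPUT:` / `ERROR:` lines from socket_vmnet_client pass through verbatim, and the first non-JSON line (`OUTPUT: `) aborts the test's decoder with `invalid character 'O' looking for beginning of value`. The mixed stream can be filtered line by line to see exactly which lines break it; a sketch, assuming jq:

    out/minikube-darwin-arm64 start -p json-output-388000 --output=json --driver=qemu2 2>/dev/null \
      | while IFS= read -r line; do
          printf '%s\n' "$line" | jq -e .specversion >/dev/null 2>&1 \
            || printf 'not a CloudEvent: %s\n' "$line"
        done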

TestJSONOutput/pause/Command (0.08s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-388000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-388000 --output=json --user=testUser: exit status 83 (78.409583ms)
-- stdout --
	{"specversion":"1.0","id":"98d7f859-5cfd-424a-a23b-a6bd8b99e524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-388000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"62e0a10c-3a7c-4caf-9427-72cf2415d4b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-388000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-388000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
TestJSONOutput/unpause/Command (0.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-388000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-388000 --output=json --user=testUser: exit status 83 (45.3965ms)
-- stdout --
	* The control-plane node json-output-388000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-388000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-388000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-388000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
TestMinikubeProfile (10.36s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-679000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-679000 --driver=qemu2 : exit status 80 (9.929382375s)
-- stdout --
	* [first-679000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-679000" primary control-plane node in "first-679000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-679000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-679000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-679000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-13 17:25:37.408343 -0700 PDT m=+443.358272960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-681000 -n second-681000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-681000 -n second-681000: exit status 85 (77.017625ms)
-- stdout --
	* Profile "second-681000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-681000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-681000" host is not running, skipping log retrieval (state="* Profile \"second-681000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-681000\"")
helpers_test.go:175: Cleaning up "second-681000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-681000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-13 17:25:37.710869 -0700 PDT m=+443.661089043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-679000 -n first-679000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-679000 -n first-679000: exit status 7 (29.166542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-679000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-679000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-679000
--- FAIL: TestMinikubeProfile (10.36s)
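
Note on the root cause: every start failure in this report reduces to the same host-side condition. Nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver's socket_vmnet_client wrapper exits with "Connection refused" before QEMU ever boots, and minikube surfaces that as GUEST_PROVISION / exit status 80. A quick check, sketched in Go under the assumption that the socket path shown in the logs is the one in use, reproduces the refusal directly:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this agent: dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

The fix lies outside the tests: the socket_vmnet daemon on the build agent has to be (re)started before any qemu2 run can pass.
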
TestMountStart/serial/StartWithMountFirst (10.09s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-526000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-526000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.022652666s)
-- stdout --
	* [mount-start-1-526000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-526000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-526000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-526000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-526000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-526000 -n mount-start-1-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-526000 -n mount-start-1-526000: exit status 7 (67.065458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)
TestMultiNode/serial/FreshStart2Nodes (9.88s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-126000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-126000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.805206875s)
-- stdout --
	* [multinode-126000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-126000" primary control-plane node in "multinode-126000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-126000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0513 17:25:48.274126   36205 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:25:48.274255   36205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:25:48.274258   36205 out.go:304] Setting ErrFile to fd 2...
	I0513 17:25:48.274260   36205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:25:48.274376   36205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:25:48.275439   36205 out.go:298] Setting JSON to false
	I0513 17:25:48.291529   36205 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26718,"bootTime":1715619630,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:25:48.291586   36205 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:25:48.297065   36205 out.go:177] * [multinode-126000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:25:48.305236   36205 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:25:48.305321   36205 notify.go:220] Checking for updates...
	I0513 17:25:48.309204   36205 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:25:48.312171   36205 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:25:48.315242   36205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:25:48.318192   36205 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:25:48.321236   36205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:25:48.324395   36205 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:25:48.329102   36205 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:25:48.336149   36205 start.go:297] selected driver: qemu2
	I0513 17:25:48.336157   36205 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:25:48.336165   36205 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:25:48.338439   36205 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:25:48.341116   36205 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:25:48.344284   36205 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:25:48.344329   36205 cni.go:84] Creating CNI manager for ""
	I0513 17:25:48.344336   36205 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0513 17:25:48.344340   36205 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0513 17:25:48.344389   36205 start.go:340] cluster config:
	{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:25:48.348801   36205 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:25:48.357212   36205 out.go:177] * Starting "multinode-126000" primary control-plane node in "multinode-126000" cluster
	I0513 17:25:48.361170   36205 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:25:48.361186   36205 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:25:48.361197   36205 cache.go:56] Caching tarball of preloaded images
	I0513 17:25:48.361254   36205 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:25:48.361260   36205 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:25:48.361469   36205 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/multinode-126000/config.json ...
	I0513 17:25:48.361480   36205 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/multinode-126000/config.json: {Name:mk35951df59cebac7b8e38f7bb16d37676bd5eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:25:48.361691   36205 start.go:360] acquireMachinesLock for multinode-126000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:25:48.361724   36205 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "multinode-126000"
	I0513 17:25:48.361737   36205 start.go:93] Provisioning new machine with config: &{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:25:48.361769   36205 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:25:48.369155   36205 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:25:48.386258   36205 start.go:159] libmachine.API.Create for "multinode-126000" (driver="qemu2")
	I0513 17:25:48.386289   36205 client.go:168] LocalClient.Create starting
	I0513 17:25:48.386350   36205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:25:48.386381   36205 main.go:141] libmachine: Decoding PEM data...
	I0513 17:25:48.386392   36205 main.go:141] libmachine: Parsing certificate...
	I0513 17:25:48.386429   36205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:25:48.386452   36205 main.go:141] libmachine: Decoding PEM data...
	I0513 17:25:48.386460   36205 main.go:141] libmachine: Parsing certificate...
	I0513 17:25:48.386827   36205 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:25:48.524257   36205 main.go:141] libmachine: Creating SSH key...
	I0513 17:25:48.664099   36205 main.go:141] libmachine: Creating Disk image...
	I0513 17:25:48.664105   36205 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:25:48.664295   36205 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:25:48.676910   36205 main.go:141] libmachine: STDOUT: 
	I0513 17:25:48.676928   36205 main.go:141] libmachine: STDERR: 
	I0513 17:25:48.676986   36205 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2 +20000M
	I0513 17:25:48.687666   36205 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:25:48.687683   36205 main.go:141] libmachine: STDERR: 
	I0513 17:25:48.687698   36205 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:25:48.687702   36205 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:25:48.687740   36205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:da:08:36:9f:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:25:48.689407   36205 main.go:141] libmachine: STDOUT: 
	I0513 17:25:48.689421   36205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:25:48.689437   36205 client.go:171] duration metric: took 303.289292ms to LocalClient.Create
	I0513 17:25:50.690700   36205 start.go:128] duration metric: took 2.329974375s to createHost
	I0513 17:25:50.690812   36205 start.go:83] releasing machines lock for "multinode-126000", held for 2.330102125s
	W0513 17:25:50.690875   36205 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:25:50.697126   36205 out.go:177] * Deleting "multinode-126000" in qemu2 ...
	W0513 17:25:50.724154   36205 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:25:50.724180   36205 start.go:728] Will try again in 5 seconds ...
	I0513 17:25:55.724607   36205 start.go:360] acquireMachinesLock for multinode-126000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:25:55.724971   36205 start.go:364] duration metric: took 300.917µs to acquireMachinesLock for "multinode-126000"
	I0513 17:25:55.725072   36205 start.go:93] Provisioning new machine with config: &{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:25:55.725322   36205 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:25:55.737829   36205 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:25:55.783661   36205 start.go:159] libmachine.API.Create for "multinode-126000" (driver="qemu2")
	I0513 17:25:55.783714   36205 client.go:168] LocalClient.Create starting
	I0513 17:25:55.783882   36205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:25:55.783957   36205 main.go:141] libmachine: Decoding PEM data...
	I0513 17:25:55.783974   36205 main.go:141] libmachine: Parsing certificate...
	I0513 17:25:55.784053   36205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:25:55.784105   36205 main.go:141] libmachine: Decoding PEM data...
	I0513 17:25:55.784120   36205 main.go:141] libmachine: Parsing certificate...
	I0513 17:25:55.784683   36205 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:25:55.933841   36205 main.go:141] libmachine: Creating SSH key...
	I0513 17:25:55.969058   36205 main.go:141] libmachine: Creating Disk image...
	I0513 17:25:55.969063   36205 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:25:55.969260   36205 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:25:55.981889   36205 main.go:141] libmachine: STDOUT: 
	I0513 17:25:55.981907   36205 main.go:141] libmachine: STDERR: 
	I0513 17:25:55.981967   36205 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2 +20000M
	I0513 17:25:55.993043   36205 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:25:55.993060   36205 main.go:141] libmachine: STDERR: 
	I0513 17:25:55.993070   36205 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:25:55.993075   36205 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:25:55.993113   36205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fb:1e:a1:65:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:25:55.994732   36205 main.go:141] libmachine: STDOUT: 
	I0513 17:25:55.994750   36205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:25:55.994762   36205 client.go:171] duration metric: took 211.107958ms to LocalClient.Create
	I0513 17:25:57.996365   36205 start.go:128] duration metric: took 2.271650625s to createHost
	I0513 17:25:57.996500   36205 start.go:83] releasing machines lock for "multinode-126000", held for 2.272109292s
	W0513 17:25:57.996798   36205 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:25:58.014373   36205 out.go:177] 
	W0513 17:25:58.017451   36205 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:25:58.017474   36205 out.go:239] * 
	* 
	W0513 17:25:58.020244   36205 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:25:58.034441   36205 out.go:177] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-126000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (69.024708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)
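
The verbose trace above shows how far provisioning gets before the outage bites: qemu-img creates and resizes the disk image successfully, and only the final exec of socket_vmnet_client fails. minikube then deletes the half-created host, waits five seconds, and retries exactly once before exiting 80, which is why each failing start test lands near the ten-second mark. A rough sketch of that retry shape (the names are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the provisioning step that fails while wiring
// QEMU into /var/run/socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
		}
	}
}
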
TestMultiNode/serial/DeployApp2Nodes (101.24s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.340875ms)
** stderr ** 
	error: cluster "multinode-126000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- rollout status deployment/busybox: exit status 1 (56.266792ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.065583ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.389375ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.649166ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.649ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.036541ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.013ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.105834ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.278417ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.601125ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.763666ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.84125ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.251ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.659125ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.712541ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.402583ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.232458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (101.24s)
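
The 101-second duration here is almost entirely retry overhead: with no API server behind the profile, each kubectl call fails in about 0.1s, but the test keeps polling for pod IPs before giving up. A sketch of that poll-and-give-up pattern (the helper and the timings are illustrative, not the test's code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// getPodIPs stands in for the repeated kubectl invocation; with no API
// server behind the profile it always fails the same way.
func getPodIPs() (string, error) {
	return "", errors.New(`no server found for cluster "multinode-126000"`)
}

func main() {
	deadline := time.Now().Add(100 * time.Second)
	for time.Now().Before(deadline) {
		ips, err := getPodIPs()
		if err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
		time.Sleep(10 * time.Second) // spacing between attempts
	}
	fmt.Println("failed to resolve pod IPs: giving up")
}
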
TestMultiNode/serial/PingHostFrom2Pods (0.08s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-126000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.307834ms)
** stderr ** 
	error: no server found for cluster "multinode-126000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.135292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)
TestMultiNode/serial/AddNode (0.07s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-126000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-126000 -v 3 --alsologtostderr: exit status 83 (39.919375ms)
-- stdout --
	* The control-plane node multinode-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-126000"
-- /stdout --
** stderr ** 
	I0513 17:27:39.464373   36298 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:39.464535   36298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.464538   36298 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:39.464540   36298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.464673   36298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:39.464919   36298 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:39.465106   36298 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:39.468672   36298 out.go:177] * The control-plane node multinode-126000 host is not running: state=Stopped
	I0513 17:27:39.471578   36298 out.go:177]   To start a cluster, run: "minikube start -p multinode-126000"
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-126000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.150292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
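
Four distinct exit codes appear across these cascading failures, and they are worth keeping apart when triaging: only exit 80 marks the provisioning error itself; everything downstream is minikube reporting an absent cluster. A small Go summary of the codes as observed in this report (the descriptions paraphrase the adjacent log lines, not minikube's authoritative reason table):

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Exit codes as they appear in this report.
	observed := map[int]string{
		7:  "`minikube status` against a stopped host",
		80: "GUEST_PROVISION: the VM could not be created or started",
		83: "usage hint: control-plane host not running, start the cluster first",
		85: "profile not found",
	}
	codes := make([]int, 0, len(observed))
	for c := range observed {
		codes = append(codes, c)
	}
	sort.Ints(codes)
	for _, c := range codes {
		fmt.Printf("exit status %d: %s\n", c, observed[c])
	}
}
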
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-126000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-126000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.425459ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-126000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-126000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-126000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.531125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
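The two errors at multinode_test.go:223 and :230 are one event: with the kubeconfig context missing, kubectl wrote only to stderr, so the label decode ran over an empty stdout. "unexpected end of JSON input" is exactly what encoding/json reports for zero-byte input; a minimal sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl produced no stdout, so the test decoded zero bytes.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}
```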

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-126000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-126000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-126000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-126000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (28.930833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
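The assertion compares the node count in the `profile list --output json` payload against the 3 nodes the suite expects; the payload above carries a single entry under Config.Nodes. A sketch of that check against a trimmed version of the payload (the struct names here are illustrative, not minikube's):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Just enough of the `profile list --output json` shape to count nodes.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed from the payload logged above: one node where 3 were expected.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-126000","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Printf("%s has %d node(s)\n", pl.Valid[0].Name, len(pl.Valid[0].Config.Nodes)) // 1, test requires 3
}
```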

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status --output json --alsologtostderr: exit status 7 (29.181625ms)

-- stdout --
	{"Name":"multinode-126000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0513 17:27:39.687783   36311 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:39.687948   36311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.687952   36311 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:39.687954   36311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.688080   36311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:39.688205   36311 out.go:298] Setting JSON to true
	I0513 17:27:39.688216   36311 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:39.688277   36311 notify.go:220] Checking for updates...
	I0513 17:27:39.688414   36311 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:39.688419   36311 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:39.688639   36311 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:39.688642   36311 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:39.688644   36311 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-126000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.046167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
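The unmarshal error at multinode_test.go:191 is a shape mismatch rather than bad JSON: with a single node, `status --output json` emits one object, while the multi-node test decodes into a slice. A sketch with a local stand-in for the test's cmd.Status:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-in for the test's cmd.Status type.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// Single-node output (copied from the stdout above) is an object,
	// so decoding into a slice fails with the same class of error.
	out := []byte(`{"Name":"multinode-126000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var st []Status
	err := json.Unmarshal(out, &st)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}
```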

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 node stop m03: exit status 85 (47.7735ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-126000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status: exit status 7 (29.796ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr: exit status 7 (28.768917ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:39.823964   36319 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:39.824117   36319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.824120   36319 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:39.824124   36319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.824242   36319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:39.824356   36319 out.go:298] Setting JSON to false
	I0513 17:27:39.824366   36319 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:39.824413   36319 notify.go:220] Checking for updates...
	I0513 17:27:39.824571   36319 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:39.824576   36319 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:39.824805   36319 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:39.824809   36319 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:39.824811   36319 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr": multinode-126000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.055666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
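GUEST_NODE_RETRIEVE here means the node name never resolved: the earlier failures left the profile with only its primary control-plane entry (see the Nodes array in the ProfileList payload above), so there is no record for "m03" to match. A hypothetical lookup illustrating the failure mode (minikube's real node records differ):

```go
package main

import "fmt"

// Hypothetical stand-in for the node lookup: the profile's node list
// holds only the primary entry, so "m03" can never match.
func findNode(names []string, want string) error {
	for _, n := range names {
		if n == want {
			return nil
		}
	}
	return fmt.Errorf("retrieving node: Could not find node %s", want)
}

func main() {
	fmt.Println(findNode([]string{"multinode-126000"}, "m03"))
}
```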

TestMultiNode/serial/StartAfterStop (44.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.865041ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0513 17:27:39.882606   36323 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:39.883084   36323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.883088   36323 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:39.883090   36323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.883305   36323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:39.883526   36323 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:39.883706   36323 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:39.888081   36323 out.go:177] 
	W0513 17:27:39.891077   36323 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0513 17:27:39.891081   36323 out.go:239] * 
	* 
	W0513 17:27:39.893658   36323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:27:39.897015   36323 out.go:177] 

** /stderr **
multinode_test.go:284: I0513 17:27:39.882606   36323 out.go:291] Setting OutFile to fd 1 ...
I0513 17:27:39.883084   36323 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:27:39.883088   36323 out.go:304] Setting ErrFile to fd 2...
I0513 17:27:39.883090   36323 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0513 17:27:39.883305   36323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
I0513 17:27:39.883526   36323 mustload.go:65] Loading cluster: multinode-126000
I0513 17:27:39.883706   36323 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0513 17:27:39.888081   36323 out.go:177] 
W0513 17:27:39.891077   36323 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0513 17:27:39.891081   36323 out.go:239] * 
* 
W0513 17:27:39.893658   36323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0513 17:27:39.897015   36323 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-126000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (29.309ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:39.929719   36325 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:39.929888   36325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.929891   36325 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:39.929893   36325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:39.930019   36325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:39.930144   36325 out.go:298] Setting JSON to false
	I0513 17:27:39.930155   36325 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:39.930205   36325 notify.go:220] Checking for updates...
	I0513 17:27:39.930342   36325 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:39.930347   36325 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:39.930566   36325 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:39.930570   36325 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:39.930572   36325 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (70.671083ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:40.704528   36327 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:40.704747   36327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:40.704751   36327 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:40.704754   36327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:40.704911   36327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:40.705063   36327 out.go:298] Setting JSON to false
	I0513 17:27:40.705081   36327 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:40.705117   36327 notify.go:220] Checking for updates...
	I0513 17:27:40.705348   36327 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:40.705354   36327 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:40.705649   36327 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:40.705654   36327 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:40.705656   36327 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (73.3585ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:42.522210   36329 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:42.522442   36329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:42.522446   36329 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:42.522449   36329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:42.522600   36329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:42.522779   36329 out.go:298] Setting JSON to false
	I0513 17:27:42.522793   36329 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:42.522835   36329 notify.go:220] Checking for updates...
	I0513 17:27:42.523077   36329 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:42.523085   36329 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:42.523365   36329 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:42.523369   36329 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:42.523373   36329 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (73.87425ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:45.037781   36335 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:45.037983   36335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:45.037988   36335 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:45.037991   36335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:45.038165   36335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:45.038344   36335 out.go:298] Setting JSON to false
	I0513 17:27:45.038360   36335 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:45.038406   36335 notify.go:220] Checking for updates...
	I0513 17:27:45.038645   36335 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:45.038652   36335 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:45.038984   36335 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:45.038989   36335 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:45.038993   36335 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (71.926584ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:47.868378   36340 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:47.868704   36340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:47.868711   36340 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:47.868715   36340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:47.868886   36340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:47.869082   36340 out.go:298] Setting JSON to false
	I0513 17:27:47.869099   36340 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:47.869142   36340 notify.go:220] Checking for updates...
	I0513 17:27:47.869397   36340 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:47.869407   36340 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:47.869677   36340 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:47.869682   36340 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:47.869685   36340 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (71.725208ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:27:55.090182   36342 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:27:55.090405   36342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:55.090410   36342 out.go:304] Setting ErrFile to fd 2...
	I0513 17:27:55.090413   36342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:27:55.090613   36342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:27:55.090775   36342 out.go:298] Setting JSON to false
	I0513 17:27:55.090789   36342 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:27:55.090832   36342 notify.go:220] Checking for updates...
	I0513 17:27:55.091068   36342 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:27:55.091074   36342 status.go:255] checking status of multinode-126000 ...
	I0513 17:27:55.091351   36342 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:27:55.091355   36342 status.go:343] host is not running, skipping remaining checks
	I0513 17:27:55.091358   36342 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (73.15325ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:28:03.359808   36346 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:03.360040   36346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:03.360048   36346 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:03.360051   36346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:03.360222   36346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:03.360403   36346 out.go:298] Setting JSON to false
	I0513 17:28:03.360419   36346 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:28:03.360460   36346 notify.go:220] Checking for updates...
	I0513 17:28:03.360692   36346 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:03.360700   36346 status.go:255] checking status of multinode-126000 ...
	I0513 17:28:03.360993   36346 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:28:03.360998   36346 status.go:343] host is not running, skipping remaining checks
	I0513 17:28:03.361001   36346 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (73.603042ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:28:15.078023   36348 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:15.078318   36348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:15.078322   36348 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:15.078326   36348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:15.078495   36348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:15.078665   36348 out.go:298] Setting JSON to false
	I0513 17:28:15.078680   36348 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:28:15.078713   36348 notify.go:220] Checking for updates...
	I0513 17:28:15.078970   36348 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:15.078983   36348 status.go:255] checking status of multinode-126000 ...
	I0513 17:28:15.079261   36348 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:28:15.079266   36348 status.go:343] host is not running, skipping remaining checks
	I0513 17:28:15.079269   36348 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr: exit status 7 (74.073625ms)

-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0513 17:28:24.605920   36350 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:24.606106   36350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:24.606110   36350 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:24.606113   36350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:24.606283   36350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:24.606435   36350 out.go:298] Setting JSON to false
	I0513 17:28:24.606450   36350 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:28:24.606491   36350 notify.go:220] Checking for updates...
	I0513 17:28:24.606700   36350 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:24.606708   36350 status.go:255] checking status of multinode-126000 ...
	I0513 17:28:24.606984   36350 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:28:24.606989   36350 status.go:343] host is not running, skipping remaining checks
	I0513 17:28:24.606992   36350 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-126000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (32.486083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (44.79s)
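Nearly all of this test's 44.79s is the status poll at multinode_test.go:290: nine checks with lengthening gaps (17:27:39 through 17:28:24), each finding the host still Stopped. An illustrative retry loop of that shape (the interval policy is a guess, not the harness's actual schedule):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll `minikube status` until it exits 0 or attempts run out,
	// mirroring the repeated checks in the log above.
	wait := time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		err := exec.Command("out/minikube-darwin-arm64",
			"-p", "multinode-126000", "status").Run()
		if err == nil {
			fmt.Println("cluster is up")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, wait)
		time.Sleep(wait)
		wait *= 2 // back off; the real harness's intervals differ
	}
	fmt.Println("giving up: host never left Stopped")
}
```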

TestMultiNode/serial/RestartKeepsNodes (8.64s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-126000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-126000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-126000: (3.288733417s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-126000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-126000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222641792s)

-- stdout --
	* [multinode-126000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-126000" primary control-plane node in "multinode-126000" cluster
	* Restarting existing qemu2 VM for "multinode-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:28:28.017819   36376 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:28.017999   36376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:28.018003   36376 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:28.018006   36376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:28.018184   36376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:28.019522   36376 out.go:298] Setting JSON to false
	I0513 17:28:28.039089   36376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26878,"bootTime":1715619630,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:28:28.039157   36376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:28:28.044550   36376 out.go:177] * [multinode-126000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:28:28.056419   36376 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:28:28.052501   36376 notify.go:220] Checking for updates...
	I0513 17:28:28.062452   36376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:28:28.065495   36376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:28:28.068448   36376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:28:28.071479   36376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:28:28.072849   36376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:28:28.075845   36376 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:28.075906   36376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:28:28.080481   36376 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:28:28.085438   36376 start.go:297] selected driver: qemu2
	I0513 17:28:28.085445   36376 start.go:901] validating driver "qemu2" against &{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:28:28.085508   36376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:28:28.087902   36376 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:28:28.087936   36376 cni.go:84] Creating CNI manager for ""
	I0513 17:28:28.087941   36376 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 17:28:28.088004   36376 start.go:340] cluster config:
	{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:28:28.092768   36376 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:28:28.100419   36376 out.go:177] * Starting "multinode-126000" primary control-plane node in "multinode-126000" cluster
	I0513 17:28:28.104502   36376 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:28:28.104520   36376 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:28:28.104532   36376 cache.go:56] Caching tarball of preloaded images
	I0513 17:28:28.104596   36376 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:28:28.104603   36376 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:28:28.104671   36376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/multinode-126000/config.json ...
	I0513 17:28:28.105121   36376 start.go:360] acquireMachinesLock for multinode-126000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:28:28.105160   36376 start.go:364] duration metric: took 31.666µs to acquireMachinesLock for "multinode-126000"
	I0513 17:28:28.105174   36376 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:28:28.105179   36376 fix.go:54] fixHost starting: 
	I0513 17:28:28.105326   36376 fix.go:112] recreateIfNeeded on multinode-126000: state=Stopped err=<nil>
	W0513 17:28:28.105338   36376 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:28:28.109442   36376 out.go:177] * Restarting existing qemu2 VM for "multinode-126000" ...
	I0513 17:28:28.117549   36376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fb:1e:a1:65:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:28:28.119836   36376 main.go:141] libmachine: STDOUT: 
	I0513 17:28:28.119859   36376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:28:28.119889   36376 fix.go:56] duration metric: took 14.709541ms for fixHost
	I0513 17:28:28.119894   36376 start.go:83] releasing machines lock for "multinode-126000", held for 14.728791ms
	W0513 17:28:28.119901   36376 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:28:28.119943   36376 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:28:28.119948   36376 start.go:728] Will try again in 5 seconds ...
	I0513 17:28:33.122057   36376 start.go:360] acquireMachinesLock for multinode-126000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:28:33.122476   36376 start.go:364] duration metric: took 310.792µs to acquireMachinesLock for "multinode-126000"
	I0513 17:28:33.122624   36376 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:28:33.122645   36376 fix.go:54] fixHost starting: 
	I0513 17:28:33.123460   36376 fix.go:112] recreateIfNeeded on multinode-126000: state=Stopped err=<nil>
	W0513 17:28:33.123487   36376 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:28:33.132978   36376 out.go:177] * Restarting existing qemu2 VM for "multinode-126000" ...
	I0513 17:28:33.137247   36376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fb:1e:a1:65:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:28:33.146754   36376 main.go:141] libmachine: STDOUT: 
	I0513 17:28:33.146820   36376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:28:33.146903   36376 fix.go:56] duration metric: took 24.258208ms for fixHost
	I0513 17:28:33.146917   36376 start.go:83] releasing machines lock for "multinode-126000", held for 24.420708ms
	W0513 17:28:33.147083   36376 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-126000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-126000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:28:33.154033   36376 out.go:177] 
	W0513 17:28:33.157979   36376 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:28:33.158018   36376 out.go:239] * 
	* 
	W0513 17:28:33.160479   36376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:28:33.168035   36376 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-126000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-126000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (32.396667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.64s)
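Triage note: every attempt in this test, and in the tests below, dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU never receives the network socket it expects on fd 3. A minimal first check on the CI host, assuming the standard lima-vm/socket_vmnet install (the foreground invocation and gateway address follow the socket_vmnet README and are assumptions, not taken from this log):

	# Does the socket exist, and is the daemon loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# Run the daemon in the foreground to watch it accept connections:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the daemon is healthy, rerunning any of the start commands in these logs should get past the "Restarting existing qemu2 VM" step.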

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 node delete m03: exit status 83 (41.920666ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-126000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-126000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr: exit status 7 (29.546666ms)

                                                
                                                
-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:28:33.352696   36393 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:33.352849   36393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:33.352852   36393 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:33.352855   36393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:33.352979   36393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:33.353101   36393 out.go:298] Setting JSON to false
	I0513 17:28:33.353111   36393 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:28:33.353175   36393 notify.go:220] Checking for updates...
	I0513 17:28:33.353297   36393 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:33.353302   36393 status.go:255] checking status of multinode-126000 ...
	I0513 17:28:33.353521   36393 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:28:33.353525   36393 status.go:343] host is not running, skipping remaining checks
	I0513 17:28:33.353527   36393 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.2525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-126000 stop: (3.36611675s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status: exit status 7 (63.400917ms)

                                                
                                                
-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr: exit status 7 (31.121ms)

                                                
                                                
-- stdout --
	multinode-126000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:28:36.843176   36417 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:36.843350   36417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:36.843353   36417 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:36.843355   36417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:36.843480   36417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:36.843598   36417 out.go:298] Setting JSON to false
	I0513 17:28:36.843609   36417 mustload.go:65] Loading cluster: multinode-126000
	I0513 17:28:36.843672   36417 notify.go:220] Checking for updates...
	I0513 17:28:36.843803   36417 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:36.843807   36417 status.go:255] checking status of multinode-126000 ...
	I0513 17:28:36.843998   36417 status.go:330] multinode-126000 host status = "Stopped" (err=<nil>)
	I0513 17:28:36.844002   36417 status.go:343] host is not running, skipping remaining checks
	I0513 17:28:36.844004   36417 status.go:257] multinode-126000 status: &{Name:multinode-126000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr": multinode-126000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-126000 status --alsologtostderr": multinode-126000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (29.427959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.49s)
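Note that the stop itself succeeded (3.37s, no error); only the follow-up status assertions fail. The cluster never gained a second node in this run, so status lists a single stopped control plane where the test presumably expects one "host: Stopped" (and one "kubelet: Stopped") per node. A quick way to reproduce the count being asserted (the grep pattern is an assumption based on the status output above, not the test source):

	out/minikube-darwin-arm64 -p multinode-126000 status | grep -c "host: Stopped"

With only the primary node present this prints 1.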

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-126000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-126000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.17707025s)

                                                
                                                
-- stdout --
	* [multinode-126000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-126000" primary control-plane node in "multinode-126000" cluster
	* Restarting existing qemu2 VM for "multinode-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-126000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:28:36.901441   36421 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:28:36.901574   36421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:36.901577   36421 out.go:304] Setting ErrFile to fd 2...
	I0513 17:28:36.901580   36421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:28:36.901706   36421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:28:36.902744   36421 out.go:298] Setting JSON to false
	I0513 17:28:36.918851   36421 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26886,"bootTime":1715619630,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:28:36.918915   36421 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:28:36.923610   36421 out.go:177] * [multinode-126000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:28:36.926585   36421 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:28:36.930484   36421 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:28:36.926637   36421 notify.go:220] Checking for updates...
	I0513 17:28:36.934579   36421 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:28:36.935730   36421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:28:36.938513   36421 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:28:36.941517   36421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:28:36.944778   36421 config.go:182] Loaded profile config "multinode-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:28:36.945048   36421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:28:36.949472   36421 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:28:36.956488   36421 start.go:297] selected driver: qemu2
	I0513 17:28:36.956503   36421 start.go:901] validating driver "qemu2" against &{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:28:36.956573   36421 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:28:36.958715   36421 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:28:36.958743   36421 cni.go:84] Creating CNI manager for ""
	I0513 17:28:36.958747   36421 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0513 17:28:36.958802   36421 start.go:340] cluster config:
	{Name:multinode-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:28:36.962943   36421 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:28:36.971508   36421 out.go:177] * Starting "multinode-126000" primary control-plane node in "multinode-126000" cluster
	I0513 17:28:36.975538   36421 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:28:36.975550   36421 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:28:36.975557   36421 cache.go:56] Caching tarball of preloaded images
	I0513 17:28:36.975597   36421 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:28:36.975602   36421 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:28:36.975649   36421 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/multinode-126000/config.json ...
	I0513 17:28:36.976060   36421 start.go:360] acquireMachinesLock for multinode-126000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:28:36.976086   36421 start.go:364] duration metric: took 19.708µs to acquireMachinesLock for "multinode-126000"
	I0513 17:28:36.976095   36421 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:28:36.976098   36421 fix.go:54] fixHost starting: 
	I0513 17:28:36.976205   36421 fix.go:112] recreateIfNeeded on multinode-126000: state=Stopped err=<nil>
	W0513 17:28:36.976212   36421 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:28:36.980552   36421 out.go:177] * Restarting existing qemu2 VM for "multinode-126000" ...
	I0513 17:28:36.988510   36421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fb:1e:a1:65:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:28:36.990390   36421 main.go:141] libmachine: STDOUT: 
	I0513 17:28:36.990417   36421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:28:36.990439   36421 fix.go:56] duration metric: took 14.340375ms for fixHost
	I0513 17:28:36.990443   36421 start.go:83] releasing machines lock for "multinode-126000", held for 14.354ms
	W0513 17:28:36.990449   36421 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:28:36.990475   36421 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:28:36.990479   36421 start.go:728] Will try again in 5 seconds ...
	I0513 17:28:41.992580   36421 start.go:360] acquireMachinesLock for multinode-126000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:28:41.993020   36421 start.go:364] duration metric: took 341.542µs to acquireMachinesLock for "multinode-126000"
	I0513 17:28:41.993136   36421 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:28:41.993155   36421 fix.go:54] fixHost starting: 
	I0513 17:28:41.993897   36421 fix.go:112] recreateIfNeeded on multinode-126000: state=Stopped err=<nil>
	W0513 17:28:41.993923   36421 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:28:41.999312   36421 out.go:177] * Restarting existing qemu2 VM for "multinode-126000" ...
	I0513 17:28:42.007495   36421 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fb:1e:a1:65:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/multinode-126000/disk.qcow2
	I0513 17:28:42.016166   36421 main.go:141] libmachine: STDOUT: 
	I0513 17:28:42.016222   36421 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:28:42.016282   36421 fix.go:56] duration metric: took 23.126041ms for fixHost
	I0513 17:28:42.016298   36421 start.go:83] releasing machines lock for "multinode-126000", held for 23.250416ms
	W0513 17:28:42.016430   36421 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-126000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-126000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:28:42.023307   36421 out.go:177] 
	W0513 17:28:42.027281   36421 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:28:42.027304   36421 out.go:239] * 
	* 
	W0513 17:28:42.029890   36421 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:28:42.038293   36421 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-126000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (68.307833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-126000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-126000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-126000-m01 --driver=qemu2 : exit status 80 (10.326293333s)

                                                
                                                
-- stdout --
	* [multinode-126000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-126000-m01" primary control-plane node in "multinode-126000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-126000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-126000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-126000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-126000-m02 --driver=qemu2 : exit status 80 (10.828057708s)

                                                
                                                
-- stdout --
	* [multinode-126000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-126000-m02" primary control-plane node in "multinode-126000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-126000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-126000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-126000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-126000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-126000: exit status 83 (81.579ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-126000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-126000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-126000 -n multinode-126000: exit status 7 (30.074666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-126000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (21.40s)
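This test creates brand-new profiles (-m01, -m02), and the create path fails with the same refusal as the restart path, which points at the host-side socket_vmnet daemon rather than stale profile state; the suggested "minikube delete" cannot help while the daemon is down. A direct reproduction sketch, assuming socket_vmnet_client's documented "SOCKET COMMAND..." usage (it connects to SOCKET, then execs COMMAND with the connection passed as a file descriptor):

	# With the daemon down this fails with the same "Connection refused";
	# with it up, "true" exits 0 and the socket is usable.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true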

                                                
                                    
TestPreload (10.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-837000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-837000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.026890084s)

                                                
                                                
-- stdout --
	* [test-preload-837000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-837000" primary control-plane node in "test-preload-837000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-837000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:29:03.692715   36480 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:29:03.692919   36480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:03.692922   36480 out.go:304] Setting ErrFile to fd 2...
	I0513 17:29:03.692924   36480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:29:03.693061   36480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:29:03.694100   36480 out.go:298] Setting JSON to false
	I0513 17:29:03.710023   36480 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26913,"bootTime":1715619630,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:29:03.710091   36480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:29:03.715809   36480 out.go:177] * [test-preload-837000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:29:03.723726   36480 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:29:03.723773   36480 notify.go:220] Checking for updates...
	I0513 17:29:03.728777   36480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:29:03.731793   36480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:29:03.734853   36480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:29:03.737751   36480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:29:03.740842   36480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:29:03.744145   36480 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:29:03.744213   36480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:29:03.748807   36480 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:29:03.755661   36480 start.go:297] selected driver: qemu2
	I0513 17:29:03.755668   36480 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:29:03.755675   36480 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:29:03.757876   36480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:29:03.760800   36480 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:29:03.763901   36480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:29:03.763925   36480 cni.go:84] Creating CNI manager for ""
	I0513 17:29:03.763939   36480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:29:03.763947   36480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:29:03.763981   36480 start.go:340] cluster config:
	{Name:test-preload-837000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:29:03.768517   36480 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.775720   36480 out.go:177] * Starting "test-preload-837000" primary control-plane node in "test-preload-837000" cluster
	I0513 17:29:03.779609   36480 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0513 17:29:03.779672   36480 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/test-preload-837000/config.json ...
	I0513 17:29:03.779688   36480 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/test-preload-837000/config.json: {Name:mkd120e4d20dd87e6117f7b55a4c8481bc885a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:29:03.779688   36480 cache.go:107] acquiring lock: {Name:mk5d133a9e8b618e6273c3126afd1a3513319a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779695   36480 cache.go:107] acquiring lock: {Name:mkc1a4a47e9939554c42f76e03345cea14f1e6dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779705   36480 cache.go:107] acquiring lock: {Name:mk7b5744e4aaa32f84789387f5a4ec84441f99e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779723   36480 cache.go:107] acquiring lock: {Name:mk4b8d97d55a3496ec59f8d0a72102585e384518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779844   36480 cache.go:107] acquiring lock: {Name:mk0e25284a4ed97f4f86aefdfa3cf4e48e5d03cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779870   36480 cache.go:107] acquiring lock: {Name:mk38091c4760ad22a764cfe5df8d999f627f9960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779920   36480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0513 17:29:03.779936   36480 start.go:360] acquireMachinesLock for test-preload-837000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:03.779940   36480 cache.go:107] acquiring lock: {Name:mk686632e7695815293f777f1464be8bf9a7f637 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779690   36480 cache.go:107] acquiring lock: {Name:mk99c89ed781945e596c823e42c3bdc9cb80177a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:29:03.779984   36480 start.go:364] duration metric: took 38.708µs to acquireMachinesLock for "test-preload-837000"
	I0513 17:29:03.779970   36480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0513 17:29:03.779999   36480 start.go:93] Provisioning new machine with config: &{Name:test-preload-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:03.780082   36480 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:29:03.780081   36480 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:03.780171   36480 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:29:03.780172   36480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0513 17:29:03.780175   36480 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:29:03.784834   36480 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:29:03.780066   36480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0513 17:29:03.780268   36480 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0513 17:29:03.791589   36480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:29:03.791606   36480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0513 17:29:03.791665   36480 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:29:03.791677   36480 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:29:03.791752   36480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0513 17:29:03.795237   36480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0513 17:29:03.795439   36480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0513 17:29:03.795542   36480 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0513 17:29:03.802654   36480 start.go:159] libmachine.API.Create for "test-preload-837000" (driver="qemu2")
	I0513 17:29:03.802670   36480 client.go:168] LocalClient.Create starting
	I0513 17:29:03.802761   36480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:03.802789   36480 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:03.802798   36480 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:03.802839   36480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:03.802860   36480 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:03.802869   36480 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:03.803177   36480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:04.161950   36480 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:04.206930   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0513 17:29:04.236215   36480 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0513 17:29:04.236234   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0513 17:29:04.244395   36480 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:04.244401   36480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:04.244574   36480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2
	I0513 17:29:04.257186   36480 main.go:141] libmachine: STDOUT: 
	I0513 17:29:04.257209   36480 main.go:141] libmachine: STDERR: 
	I0513 17:29:04.257257   36480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2 +20000M
	I0513 17:29:04.266845   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0513 17:29:04.268518   36480 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:04.268528   36480 main.go:141] libmachine: STDERR: 
	I0513 17:29:04.268537   36480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2
	I0513 17:29:04.268543   36480 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:04.268572   36480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:cd:2c:45:34:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2
	I0513 17:29:04.270434   36480 main.go:141] libmachine: STDOUT: 
	I0513 17:29:04.270461   36480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:04.270477   36480 client.go:171] duration metric: took 467.813333ms to LocalClient.Create
	I0513 17:29:04.285314   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0513 17:29:04.305459   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0513 17:29:04.364454   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0513 17:29:04.414814   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0513 17:29:04.512221   36480 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0513 17:29:04.512304   36480 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0513 17:29:04.552487   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0513 17:29:04.552545   36480 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 772.661708ms
	I0513 17:29:04.552589   36480 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0513 17:29:04.717653   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0513 17:29:04.717720   36480 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 938.047833ms
	I0513 17:29:04.717747   36480 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0513 17:29:06.092506   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0513 17:29:06.092569   36480 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.312741916s
	I0513 17:29:06.092594   36480 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0513 17:29:06.270640   36480 start.go:128] duration metric: took 2.49057475s to createHost
	I0513 17:29:06.270693   36480 start.go:83] releasing machines lock for "test-preload-837000", held for 2.490750083s
	W0513 17:29:06.270781   36480 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:06.283981   36480 out.go:177] * Deleting "test-preload-837000" in qemu2 ...
	W0513 17:29:06.312181   36480 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:06.312238   36480 start.go:728] Will try again in 5 seconds ...
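
	StartHost gets one retry: on failure the driver deletes the half-created profile and, as logged above, waits five seconds before a second attempt. A sketch of that loop's shape (the create callback is a stand-in, not minikube's signature):

	    package sketch

	    import (
	        "log"
	        "time"
	    )

	    // startWithRetry tries create, and on failure waits delay before trying
	    // again, up to attempts times -- the "Will try again in 5 seconds"
	    // pattern visible above.
	    func startWithRetry(create func() error, delay time.Duration, attempts int) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = create(); err == nil {
	                return nil
	            }
	            log.Printf("StartHost failed, but will try again: %v", err)
	            if i < attempts-1 {
	                time.Sleep(delay)
	            }
	        }
	        return err
	    }
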
	I0513 17:29:07.164319   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0513 17:29:07.164366   36480 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.384705791s
	I0513 17:29:07.164392   36480 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0513 17:29:07.752965   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0513 17:29:07.753015   36480 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.973408375s
	I0513 17:29:07.753041   36480 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0513 17:29:08.283111   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0513 17:29:08.283163   36480 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.503542167s
	I0513 17:29:08.283186   36480 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0513 17:29:09.356994   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0513 17:29:09.357040   36480 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.577455792s
	I0513 17:29:09.357067   36480 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0513 17:29:11.312346   36480 start.go:360] acquireMachinesLock for test-preload-837000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:29:11.312812   36480 start.go:364] duration metric: took 385.834µs to acquireMachinesLock for "test-preload-837000"
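
	Note the acquireMachinesLock parameters in the line above: a 500 ms retry delay and a 13-minute timeout. The real lock is cross-process, but in-process the polling shape reduces to something like this sketch (try, delay and timeout are illustrative names):

	    package sketch

	    import (
	        "fmt"
	        "time"
	    )

	    // acquire polls try every delay until it succeeds or timeout elapses,
	    // matching the Delay:500ms Timeout:13m0s shape logged above.
	    func acquire(try func() bool, delay, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for !try() {
	            if time.Now().After(deadline) {
	                return fmt.Errorf("lock not acquired within %v", timeout)
	            }
	            time.Sleep(delay)
	        }
	        return nil
	    }
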
	I0513 17:29:11.312963   36480 start.go:93] Provisioning new machine with config: &{Name:test-preload-837000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-837000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:29:11.313213   36480 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:29:11.319838   36480 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:29:11.368323   36480 start.go:159] libmachine.API.Create for "test-preload-837000" (driver="qemu2")
	I0513 17:29:11.368380   36480 client.go:168] LocalClient.Create starting
	I0513 17:29:11.368489   36480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:29:11.368555   36480 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:11.368576   36480 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:11.368650   36480 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:29:11.368694   36480 main.go:141] libmachine: Decoding PEM data...
	I0513 17:29:11.368709   36480 main.go:141] libmachine: Parsing certificate...
	I0513 17:29:11.369185   36480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:29:11.530986   36480 main.go:141] libmachine: Creating SSH key...
	I0513 17:29:11.612853   36480 main.go:141] libmachine: Creating Disk image...
	I0513 17:29:11.612858   36480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:29:11.613034   36480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2
	I0513 17:29:11.616683   36480 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0513 17:29:11.616707   36480 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.8370285s
	I0513 17:29:11.616715   36480 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0513 17:29:11.616731   36480 cache.go:87] Successfully saved all images to host disk.
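
	The cache.go lines interleaved through this log all follow one rule: an image is pulled and converted only when its per-architecture tar is absent from the cache directory. A sketch of that existence check, assuming the path scheme visible above (tag colon replaced by underscore); the helper name is invented:

	    package sketch

	    import (
	        "os"
	        "path/filepath"
	        "strings"
	    )

	    // cachedTarPath maps "registry.k8s.io/pause:3.7" to
	    // <dir>/registry.k8s.io/pause_3.7 and reports whether it already exists.
	    func cachedTarPath(dir, imageRef string) (string, bool) {
	        p := filepath.Join(dir, strings.ReplaceAll(imageRef, ":", "_"))
	        _, err := os.Stat(p)
	        return p, err == nil
	    }
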
	I0513 17:29:11.625692   36480 main.go:141] libmachine: STDOUT: 
	I0513 17:29:11.625710   36480 main.go:141] libmachine: STDERR: 
	I0513 17:29:11.625773   36480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2 +20000M
	I0513 17:29:11.636783   36480 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:29:11.636805   36480 main.go:141] libmachine: STDERR: 
	I0513 17:29:11.636824   36480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2
	I0513 17:29:11.636835   36480 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:29:11.636879   36480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:6a:a8:61:ec:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/test-preload-837000/disk.qcow2
	I0513 17:29:11.638562   36480 main.go:141] libmachine: STDOUT: 
	I0513 17:29:11.638583   36480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:29:11.638600   36480 client.go:171] duration metric: took 270.218959ms to LocalClient.Create
	I0513 17:29:13.640928   36480 start.go:128] duration metric: took 2.327695083s to createHost
	I0513 17:29:13.641001   36480 start.go:83] releasing machines lock for "test-preload-837000", held for 2.328209166s
	W0513 17:29:13.641486   36480 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-837000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-837000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:29:13.656058   36480 out.go:177] 
	W0513 17:29:13.661182   36480 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:29:13.661216   36480 out.go:239] * 
	* 
	W0513 17:29:13.663956   36480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:29:13.676937   36480 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-837000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-13 17:29:13.694465 -0700 PDT m=+659.663470793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-837000 -n test-preload-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-837000 -n test-preload-837000: exit status 7 (65.907042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-837000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-837000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-837000
--- FAIL: TestPreload (10.19s)
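
Every failure from here to the end of this report bottoms out in the same STDERR line: Failed to connect to "/var/run/socket_vmnet": Connection refused. QEMU itself is never reached; socket_vmnet_client cannot connect to the socket_vmnet daemon, which suggests the daemon (normally run as a root launchd/brew service on these build hosts) was not listening. A standalone probe for that condition, assuming the default socket path:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Dial the socket_vmnet control socket the way socket_vmnet_client must
    // before it can hand QEMU a network fd; "connection refused" means no
    // daemon is listening on the path.
    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }
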

TestScheduledStopUnix (10.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-151000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-151000 --memory=2048 --driver=qemu2 : exit status 80 (10.011656542s)

-- stdout --
	* [scheduled-stop-151000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-151000" primary control-plane node in "scheduled-stop-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-151000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-151000" primary control-plane node in "scheduled-stop-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-13 17:29:23.869373 -0700 PDT m=+669.838582251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-151000 -n scheduled-stop-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-151000 -n scheduled-stop-151000: exit status 7 (67.088416ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-151000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-151000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-151000
--- FAIL: TestScheduledStopUnix (10.18s)
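
The harness keys its failure handling off minikube's exit status 80 (GUEST_PROVISION); the whole post-mortem above is driven by that code. A sketch of how a caller can pick it out, with the binary path and flags copied from the log (the assertion itself is illustrative, not the test's code):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "start",
            "-p", "scheduled-stop-151000", "--memory=2048", "--driver=qemu2")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 80 {
            fmt.Printf("provisioning failed (exit 80):\n%s", out)
        }
    }
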

TestSkaffold (12.27s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1191899215 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-430000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-430000 --memory=2600 --driver=qemu2 : exit status 80 (10.000360625s)

-- stdout --
	* [skaffold-430000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-430000" primary control-plane node in "skaffold-430000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-430000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-430000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-430000" primary control-plane node in "skaffold-430000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-430000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-430000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-13 17:29:36.141987 -0700 PDT m=+682.111441751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-430000 -n skaffold-430000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-430000 -n skaffold-430000: exit status 7 (61.963167ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-430000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-430000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-430000
--- FAIL: TestSkaffold (12.27s)

TestRunningBinaryUpgrade (615.68s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3344372777 start -p running-upgrade-056000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3344372777 start -p running-upgrade-056000 --memory=2200 --vm-driver=qemu2 : (56.174173292s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-056000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-056000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.350977292s)

-- stdout --
	* [running-upgrade-056000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-056000" primary control-plane node in "running-upgrade-056000" cluster
	* Updating the running qemu2 "running-upgrade-056000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0513 17:31:16.485561   36897 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:31:16.485679   36897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:31:16.485682   36897 out.go:304] Setting ErrFile to fd 2...
	I0513 17:31:16.485685   36897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:31:16.485814   36897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:31:16.486722   36897 out.go:298] Setting JSON to false
	I0513 17:31:16.502848   36897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27046,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:31:16.502910   36897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:31:16.508407   36897 out.go:177] * [running-upgrade-056000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:31:16.519248   36897 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:31:16.516417   36897 notify.go:220] Checking for updates...
	I0513 17:31:16.527294   36897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:31:16.530341   36897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:31:16.533288   36897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:31:16.536344   36897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:31:16.543298   36897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:31:16.547411   36897 config.go:182] Loaded profile config "running-upgrade-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:31:16.550299   36897 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0513 17:31:16.553333   36897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:31:16.557111   36897 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:31:16.564360   36897 start.go:297] selected driver: qemu2
	I0513 17:31:16.564367   36897 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56125 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:31:16.564423   36897 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:31:16.566588   36897 cni.go:84] Creating CNI manager for ""
	I0513 17:31:16.566606   36897 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:31:16.566633   36897 start.go:340] cluster config:
	{Name:running-upgrade-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56125 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:31:16.566686   36897 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:31:16.574261   36897 out.go:177] * Starting "running-upgrade-056000" primary control-plane node in "running-upgrade-056000" cluster
	I0513 17:31:16.577422   36897 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0513 17:31:16.577451   36897 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0513 17:31:16.577462   36897 cache.go:56] Caching tarball of preloaded images
	I0513 17:31:16.577552   36897 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:31:16.577558   36897 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0513 17:31:16.577613   36897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/config.json ...
	I0513 17:31:16.578092   36897 start.go:360] acquireMachinesLock for running-upgrade-056000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:31:16.578126   36897 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "running-upgrade-056000"
	I0513 17:31:16.578135   36897 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:31:16.578139   36897 fix.go:54] fixHost starting: 
	I0513 17:31:16.578784   36897 fix.go:112] recreateIfNeeded on running-upgrade-056000: state=Running err=<nil>
	W0513 17:31:16.578792   36897 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:31:16.587341   36897 out.go:177] * Updating the running qemu2 "running-upgrade-056000" VM ...
	I0513 17:31:16.591333   36897 machine.go:94] provisionDockerMachine start ...
	I0513 17:31:16.591393   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:16.591517   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:16.591521   36897 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 17:31:16.644014   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-056000
	
	I0513 17:31:16.644028   36897 buildroot.go:166] provisioning hostname "running-upgrade-056000"
	I0513 17:31:16.644090   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:16.644209   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:16.644215   36897 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-056000 && echo "running-upgrade-056000" | sudo tee /etc/hostname
	I0513 17:31:16.701057   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-056000
	
	I0513 17:31:16.701104   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:16.701211   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:16.701219   36897 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-056000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-056000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-056000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 17:31:16.753424   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
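
	Everything from provisionDockerMachine onward is plain shell commands run over SSH against localhost:56093 (the forwarded guest port). A minimal version of that runner using golang.org/x/crypto/ssh, assuming key auth with the machine's id_rsa; minikube's "native" SSH client is built on the same package, but this helper and its field choices are a sketch, not its code:

	    package sketch

	    import (
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    // runOverSSH executes one command on the guest, the way each
	    // "About to run SSH command" step above does.
	    func runOverSSH(addr, keyPath, cmd string) (string, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return "", err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return "", err
	        }
	        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	        })
	        if err != nil {
	            return "", err
	        }
	        defer client.Close()
	        session, err := client.NewSession()
	        if err != nil {
	            return "", err
	        }
	        defer session.Close()
	        out, err := session.CombinedOutput(cmd)
	        return string(out), err
	    }
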
	I0513 17:31:16.753441   36897 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18872-34554/.minikube CaCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18872-34554/.minikube}
	I0513 17:31:16.753449   36897 buildroot.go:174] setting up certificates
	I0513 17:31:16.753457   36897 provision.go:84] configureAuth start
	I0513 17:31:16.753461   36897 provision.go:143] copyHostCerts
	I0513 17:31:16.753537   36897 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem, removing ...
	I0513 17:31:16.753543   36897 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem
	I0513 17:31:16.753653   36897 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem (1082 bytes)
	I0513 17:31:16.753805   36897 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem, removing ...
	I0513 17:31:16.753808   36897 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem
	I0513 17:31:16.753863   36897 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem (1123 bytes)
	I0513 17:31:16.753991   36897 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem, removing ...
	I0513 17:31:16.753996   36897 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem
	I0513 17:31:16.754044   36897 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem (1675 bytes)
	I0513 17:31:16.754144   36897 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-056000 san=[127.0.0.1 localhost minikube running-upgrade-056000]
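
	The "generating server cert" step above issues a server certificate signed by the profile's local CA, carrying the SANs listed (127.0.0.1, localhost, minikube, the profile name). A compact sketch of that issuance with crypto/x509; the helper and its exact template fields are illustrative, not minikube's code:

	    package sketch

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    // signServerCert issues a server certificate for the given SANs, signed
	    // by an already-parsed CA certificate and key, returning the DER bytes.
	    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	        org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{org}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     dnsNames,
	            IPAddresses:  ips,
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	        return der, key, err
	    }
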
	I0513 17:31:16.862347   36897 provision.go:177] copyRemoteCerts
	I0513 17:31:16.862397   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 17:31:16.862405   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:31:16.890968   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0513 17:31:16.897593   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0513 17:31:16.903983   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 17:31:16.911381   36897 provision.go:87] duration metric: took 157.923125ms to configureAuth
	I0513 17:31:16.911391   36897 buildroot.go:189] setting minikube options for container-runtime
	I0513 17:31:16.911503   36897 config.go:182] Loaded profile config "running-upgrade-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:31:16.911532   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:16.911611   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:16.911616   36897 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 17:31:16.966870   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 17:31:16.966882   36897 buildroot.go:70] root file system type: tmpfs
	I0513 17:31:16.966936   36897 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 17:31:16.966987   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:16.967118   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:16.967151   36897 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 17:31:17.024935   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 17:31:17.024996   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:17.025106   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:17.025114   36897 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 17:31:17.077912   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 17:31:17.077923   36897 machine.go:97] duration metric: took 486.593917ms to provisionDockerMachine
	I0513 17:31:17.077933   36897 start.go:293] postStartSetup for "running-upgrade-056000" (driver="qemu2")
	I0513 17:31:17.077940   36897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 17:31:17.077992   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 17:31:17.078001   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:31:17.107197   36897 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 17:31:17.108527   36897 info.go:137] Remote host: Buildroot 2021.02.12
	I0513 17:31:17.108534   36897 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18872-34554/.minikube/addons for local assets ...
	I0513 17:31:17.108591   36897 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18872-34554/.minikube/files for local assets ...
	I0513 17:31:17.108674   36897 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem -> 350552.pem in /etc/ssl/certs
	I0513 17:31:17.108772   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 17:31:17.111686   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem --> /etc/ssl/certs/350552.pem (1708 bytes)
	I0513 17:31:17.118451   36897 start.go:296] duration metric: took 40.513791ms for postStartSetup
	I0513 17:31:17.118466   36897 fix.go:56] duration metric: took 540.33825ms for fixHost
	I0513 17:31:17.118495   36897 main.go:141] libmachine: Using SSH client type: native
	I0513 17:31:17.118593   36897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104805dd0] 0x104808630 <nil>  [] 0s} localhost 56093 <nil> <nil>}
	I0513 17:31:17.118600   36897 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 17:31:17.169992   36897 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715646677.288172889
	
	I0513 17:31:17.169999   36897 fix.go:216] guest clock: 1715646677.288172889
	I0513 17:31:17.170003   36897 fix.go:229] Guest: 2024-05-13 17:31:17.288172889 -0700 PDT Remote: 2024-05-13 17:31:17.118467 -0700 PDT m=+0.653188459 (delta=169.705889ms)
	I0513 17:31:17.170014   36897 fix.go:200] guest clock delta is within tolerance: 169.705889ms
	I0513 17:31:17.170017   36897 start.go:83] releasing machines lock for "running-upgrade-056000", held for 591.898375ms
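	The fix.go lines above read the guest's clock over SSH (date +%s.%N), subtract the host's reading, and proceed only when the absolute skew is small. A rough bash equivalent (a sketch; the 2-second tolerance is illustrative, and the first reading really happens on the guest):

	    guest=$(date +%s.%N)   # in the real flow this value comes from the guest via SSH
	    host=$(date +%s.%N)
	    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; exit !(d < 2) }' \
	      && echo "guest clock delta within tolerance" \
	      || echo "clock skew too large"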
	I0513 17:31:17.170080   36897 ssh_runner.go:195] Run: cat /version.json
	I0513 17:31:17.170090   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:31:17.170080   36897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 17:31:17.170135   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	W0513 17:31:17.170678   36897 sshutil.go:64] dial failure (will retry): dial tcp [::1]:56093: connect: connection refused
	I0513 17:31:17.170704   36897 retry.go:31] will retry after 231.803012ms: dial tcp [::1]:56093: connect: connection refused
	W0513 17:31:17.197577   36897 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0513 17:31:17.197629   36897 ssh_runner.go:195] Run: systemctl --version
	I0513 17:31:17.200116   36897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 17:31:17.201836   36897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 17:31:17.201859   36897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0513 17:31:17.204610   36897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0513 17:31:17.209176   36897 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 17:31:17.209182   36897 start.go:494] detecting cgroup driver to use...
	I0513 17:31:17.209300   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 17:31:17.214437   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0513 17:31:17.217209   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 17:31:17.220417   36897 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 17:31:17.220437   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 17:31:17.224049   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 17:31:17.227396   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 17:31:17.230114   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 17:31:17.232844   36897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 17:31:17.236249   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 17:31:17.239855   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 17:31:17.242924   36897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 17:31:17.245755   36897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 17:31:17.248593   36897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 17:31:17.251777   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:31:17.322755   36897 ssh_runner.go:195] Run: sudo systemctl restart containerd
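	The sed edits above switch containerd to the cgroupfs cgroup driver (SystemdCgroup = false), pin the sandbox image to registry.k8s.io/pause:3.7, and point conf_dir at /etc/cni/net.d before the daemon is restarted. Two quick spot-checks after the restart (a sketch):

	    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	    sudo systemctl is-active containerd                        # confirm the restart left the unit running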
	I0513 17:31:17.333076   36897 start.go:494] detecting cgroup driver to use...
	I0513 17:31:17.333151   36897 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 17:31:17.342749   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 17:31:17.347643   36897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 17:31:17.352976   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 17:31:17.357529   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 17:31:17.362381   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 17:31:17.367213   36897 ssh_runner.go:195] Run: which cri-dockerd
	I0513 17:31:17.368463   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 17:31:17.370975   36897 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 17:31:17.376258   36897 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 17:31:17.470595   36897 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 17:31:17.546510   36897 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 17:31:17.546564   36897 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 17:31:17.553055   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:31:17.635321   36897 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 17:31:30.502834   36897 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.867751583s)
	I0513 17:31:30.502901   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 17:31:30.507688   36897 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0513 17:31:30.514422   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 17:31:30.519994   36897 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 17:31:30.604415   36897 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 17:31:30.669234   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:31:30.738072   36897 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 17:31:30.743901   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 17:31:30.748452   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:31:30.818582   36897 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 17:31:30.858205   36897 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 17:31:30.858278   36897 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 17:31:30.860679   36897 start.go:562] Will wait 60s for crictl version
	I0513 17:31:30.860735   36897 ssh_runner.go:195] Run: which crictl
	I0513 17:31:30.862114   36897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 17:31:30.874222   36897 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0513 17:31:30.874292   36897 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 17:31:30.887040   36897 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 17:31:30.906863   36897 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0513 17:31:30.907007   36897 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0513 17:31:30.908423   36897 kubeadm.go:877] updating cluster {Name:running-upgrade-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56125 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0513 17:31:30.908471   36897 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0513 17:31:30.908506   36897 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 17:31:30.919052   36897 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 17:31:30.919060   36897 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0513 17:31:30.919099   36897 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 17:31:30.922302   36897 ssh_runner.go:195] Run: which lz4
	I0513 17:31:30.923483   36897 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 17:31:30.924740   36897 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 17:31:30.924751   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0513 17:31:31.585034   36897 docker.go:649] duration metric: took 661.595042ms to copy over tarball
	I0513 17:31:31.585090   36897 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 17:31:33.738352   36897 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153292625s)
	I0513 17:31:33.738367   36897 ssh_runner.go:146] rm: /preloaded.tar.lz4
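	The stat/scp pair above is the existence-check-then-transfer pattern used throughout this log: a failing stat (exit status 1) means the file is absent on the guest, so the cached copy is pushed and then unpacked. In plain bash (a sketch; $LOCAL_PRELOAD_TARBALL is a hypothetical stand-in for the host-side path under .minikube/cache):

	    if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
	      scp "$LOCAL_PRELOAD_TARBALL" docker@guest:/preloaded.tar.lz4   # hypothetical scp; minikube uses its own SSH transfer
	    fi
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4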
	I0513 17:31:33.754269   36897 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 17:31:33.757356   36897 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0513 17:31:33.762405   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:31:33.827929   36897 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 17:31:35.207509   36897 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.379592416s)
	I0513 17:31:35.207589   36897 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 17:31:35.218036   36897 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 17:31:35.218045   36897 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0513 17:31:35.218050   36897 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0513 17:31:35.225992   36897 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:31:35.226085   36897 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:31:35.226250   36897 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:31:35.226325   36897 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:31:35.226418   36897 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0513 17:31:35.226755   36897 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:31:35.226906   36897 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:31:35.226917   36897 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:31:35.234930   36897 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:31:35.235051   36897 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:31:35.235344   36897 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:31:35.236281   36897 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:31:35.237002   36897 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:31:35.237076   36897 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:31:35.237132   36897 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:31:35.237231   36897 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	W0513 17:31:35.690362   36897 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0513 17:31:35.690516   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:31:35.695920   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:31:35.703258   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:31:35.707303   36897 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0513 17:31:35.707326   36897 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:31:35.707368   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:31:35.709012   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:31:35.709656   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0513 17:31:35.715377   36897 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0513 17:31:35.715400   36897 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:31:35.715449   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:31:35.718126   36897 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0513 17:31:35.718144   36897 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:31:35.718184   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:31:35.740168   36897 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0513 17:31:35.740188   36897 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:31:35.740237   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0513 17:31:35.740297   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0513 17:31:35.740394   36897 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0513 17:31:35.748066   36897 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0513 17:31:35.748087   36897 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:31:35.748135   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:31:35.748904   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:31:35.753442   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0513 17:31:35.753504   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0513 17:31:35.758292   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0513 17:31:35.760252   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0513 17:31:35.760267   36897 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0513 17:31:35.760279   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0513 17:31:35.760363   36897 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0513 17:31:35.764323   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0513 17:31:35.781320   36897 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0513 17:31:35.781342   36897 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0513 17:31:35.781374   36897 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0513 17:31:35.781391   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0513 17:31:35.781397   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0513 17:31:35.781320   36897 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0513 17:31:35.781412   36897 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:31:35.781426   36897 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:31:35.816024   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0513 17:31:35.816024   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0513 17:31:35.816158   36897 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0513 17:31:35.836225   36897 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0513 17:31:35.836256   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0513 17:31:35.857244   36897 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0513 17:31:35.857261   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0513 17:31:35.979652   36897 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0513 17:31:35.979676   36897 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0513 17:31:35.979684   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0513 17:31:36.067316   36897 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0513 17:31:36.067340   36897 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0513 17:31:36.067349   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0513 17:31:36.171267   36897 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0513 17:31:36.171371   36897 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:31:36.201692   36897 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0513 17:31:36.201712   36897 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0513 17:31:36.201734   36897 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:31:36.201788   36897 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:31:36.663883   36897 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0513 17:31:36.664166   36897 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0513 17:31:36.668884   36897 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0513 17:31:36.668925   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0513 17:31:36.718917   36897 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0513 17:31:36.718932   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0513 17:31:37.011354   36897 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0513 17:31:37.011401   36897 cache_images.go:92] duration metric: took 1.793381333s to LoadCachedImages
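	Each cached image is copied into /var/lib/minikube/images and then streamed into the runtime; sudo sits on the cat side of the pipe so only the file read needs privileges. The load step in isolation (a sketch):

	    sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load
	    docker images --format '{{.Repository}}:{{.Tag}}' | grep coredns   # confirm the tag landed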
	W0513 17:31:37.011447   36897 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0513 17:31:37.011453   36897 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0513 17:31:37.011498   36897 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-056000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 17:31:37.011559   36897 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 17:31:37.052907   36897 cni.go:84] Creating CNI manager for ""
	I0513 17:31:37.052922   36897 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:31:37.052927   36897 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 17:31:37.052936   36897 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-056000 NodeName:running-upgrade-056000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 17:31:37.053008   36897 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-056000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0513 17:31:37.053069   36897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0513 17:31:37.057313   36897 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 17:31:37.057359   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 17:31:37.060632   36897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0513 17:31:37.067793   36897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 17:31:37.081593   36897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0513 17:31:37.098142   36897 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0513 17:31:37.099907   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:31:37.228324   36897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:31:37.234252   36897 certs.go:68] Setting up /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000 for IP: 10.0.2.15
	I0513 17:31:37.234275   36897 certs.go:194] generating shared ca certs ...
	I0513 17:31:37.234283   36897 certs.go:226] acquiring lock for ca certs: {Name:mk4bcf4fefcc4c80b8079c869e5ba8b057091109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:31:37.234554   36897 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.key
	I0513 17:31:37.234589   36897 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.key
	I0513 17:31:37.234594   36897 certs.go:256] generating profile certs ...
	I0513 17:31:37.234651   36897 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.key
	I0513 17:31:37.234663   36897 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.key.101bd405
	I0513 17:31:37.234674   36897 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.crt.101bd405 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0513 17:31:37.343193   36897 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.crt.101bd405 ...
	I0513 17:31:37.343207   36897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.crt.101bd405: {Name:mk0a575eb7b041a9ab2f6ce08a661ba60c98b8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:31:37.343516   36897 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.key.101bd405 ...
	I0513 17:31:37.343522   36897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.key.101bd405: {Name:mkd800d46a87bbe3db39b24904c295d8b41adf46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:31:37.343644   36897 certs.go:381] copying /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.crt.101bd405 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.crt
	I0513 17:31:37.343781   36897 certs.go:385] copying /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.key.101bd405 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.key
	I0513 17:31:37.343914   36897 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/proxy-client.key
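	The apiserver certificate is regenerated with the service IP (10.96.0.1), the loopbacks, and the node IP (10.0.2.15) as SANs, and the .101bd405-suffixed files are promoted once both halves are on disk. A self-signed stand-in that produces the same SAN set (a sketch only; minikube actually signs with its minikubeCA, and -addext needs OpenSSL 1.1.1+):

	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout apiserver.key -out apiserver.crt -subj '/CN=minikube' \
	      -addext 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:10.0.2.15'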
	I0513 17:31:37.344035   36897 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055.pem (1338 bytes)
	W0513 17:31:37.344056   36897 certs.go:480] ignoring /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055_empty.pem, impossibly tiny 0 bytes
	I0513 17:31:37.344062   36897 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem (1675 bytes)
	I0513 17:31:37.344083   36897 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem (1082 bytes)
	I0513 17:31:37.344102   36897 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem (1123 bytes)
	I0513 17:31:37.344119   36897 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem (1675 bytes)
	I0513 17:31:37.344157   36897 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem (1708 bytes)
	I0513 17:31:37.344503   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 17:31:37.367499   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0513 17:31:37.380061   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 17:31:37.389252   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0513 17:31:37.401170   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0513 17:31:37.412127   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0513 17:31:37.422965   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 17:31:37.433267   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0513 17:31:37.449306   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055.pem --> /usr/share/ca-certificates/35055.pem (1338 bytes)
	I0513 17:31:37.456840   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem --> /usr/share/ca-certificates/350552.pem (1708 bytes)
	I0513 17:31:37.466656   36897 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 17:31:37.475717   36897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 17:31:37.480798   36897 ssh_runner.go:195] Run: openssl version
	I0513 17:31:37.485209   36897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 17:31:37.494350   36897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:31:37.495795   36897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 14 00:31 /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:31:37.495813   36897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:31:37.497523   36897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 17:31:37.504614   36897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35055.pem && ln -fs /usr/share/ca-certificates/35055.pem /etc/ssl/certs/35055.pem"
	I0513 17:31:37.507407   36897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35055.pem
	I0513 17:31:37.508924   36897 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 14 00:19 /usr/share/ca-certificates/35055.pem
	I0513 17:31:37.508947   36897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35055.pem
	I0513 17:31:37.510691   36897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/35055.pem /etc/ssl/certs/51391683.0"
	I0513 17:31:37.516812   36897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/350552.pem && ln -fs /usr/share/ca-certificates/350552.pem /etc/ssl/certs/350552.pem"
	I0513 17:31:37.523868   36897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/350552.pem
	I0513 17:31:37.525261   36897 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 14 00:19 /usr/share/ca-certificates/350552.pem
	I0513 17:31:37.525279   36897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/350552.pem
	I0513 17:31:37.527286   36897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/350552.pem /etc/ssl/certs/3ec20f2e.0"
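	The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: the TLS stack looks up a CA by hashing its subject, so each PEM gets a <hash>.0 symlink in /etc/ssl/certs. Deriving one by hand (a sketch):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h should print as b5213941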
	I0513 17:31:37.529894   36897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 17:31:37.531280   36897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 17:31:37.533042   36897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 17:31:37.534647   36897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 17:31:37.536346   36897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 17:31:37.540073   36897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 17:31:37.549486   36897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
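	The -checkend 86400 runs above are 24-hour expiry probes: openssl x509 -checkend N exits non-zero if the certificate expires within N seconds, which is what would trigger regeneration. For example (a sketch):

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least another 24h" \
	      || echo "expires within 24h (or is already expired)"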
	I0513 17:31:37.551222   36897 kubeadm.go:391] StartCluster: {Name:running-upgrade-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56125 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:31:37.551290   36897 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 17:31:37.592460   36897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0513 17:31:37.595744   36897 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0513 17:31:37.595751   36897 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0513 17:31:37.595754   36897 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0513 17:31:37.595782   36897 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0513 17:31:37.604116   36897 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:31:37.604154   36897 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-056000" does not appear in /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:31:37.604168   36897 kubeconfig.go:62] /Users/jenkins/minikube-integration/18872-34554/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-056000" cluster setting kubeconfig missing "running-upgrade-056000" context setting]
	I0513 17:31:37.604358   36897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:31:37.605298   36897 kapi.go:59] client config for running-upgrade-056000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b8de10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 17:31:37.606137   36897 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0513 17:31:37.612299   36897 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-056000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0513 17:31:37.612303   36897 kubeadm.go:1154] stopping kube-system containers ...
	I0513 17:31:37.612338   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 17:31:37.676824   36897 docker.go:483] Stopping containers: [8176cd4f3d53 7fc083126b07 5b1425b81ae0 f770326bb15c 3dba3dfb96a4 ee0d6f1444e8 00daea211ab6 0a133731bd14 6202ce164016 390352518a60 15dcaa442a6a 573b62a7ff6c b8a0d562dc85 eaba027fa937 9df9c587a04c 82d51bb27205 02872884bfcb 54a02847c526 81af5399b913]
	I0513 17:31:37.676906   36897 ssh_runner.go:195] Run: docker stop 8176cd4f3d53 7fc083126b07 5b1425b81ae0 f770326bb15c 3dba3dfb96a4 ee0d6f1444e8 00daea211ab6 0a133731bd14 6202ce164016 390352518a60 15dcaa442a6a 573b62a7ff6c b8a0d562dc85 eaba027fa937 9df9c587a04c 82d51bb27205 02872884bfcb 54a02847c526 81af5399b913
	I0513 17:31:47.807112   36897 ssh_runner.go:235] Completed: docker stop 8176cd4f3d53 7fc083126b07 5b1425b81ae0 f770326bb15c 3dba3dfb96a4 ee0d6f1444e8 00daea211ab6 0a133731bd14 6202ce164016 390352518a60 15dcaa442a6a 573b62a7ff6c b8a0d562dc85 eaba027fa937 9df9c587a04c 82d51bb27205 02872884bfcb 54a02847c526 81af5399b913: (10.130384625s)
	I0513 17:31:47.807216   36897 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0513 17:31:47.911341   36897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:31:47.918087   36897 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 May 14 00:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 14 00:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May 14 00:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May 14 00:31 /etc/kubernetes/scheduler.conf
	
	I0513 17:31:47.918141   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/admin.conf
	I0513 17:31:47.923258   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:31:47.923298   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:31:47.927761   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/kubelet.conf
	I0513 17:31:47.931704   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:31:47.931739   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:31:47.935657   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/controller-manager.conf
	I0513 17:31:47.939154   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:31:47.939187   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:31:47.942495   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/scheduler.conf
	I0513 17:31:47.945646   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:31:47.945669   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
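	Each of the four kubeconfigs is grepped for the expected control-plane endpoint (port 56125) and removed when the grep fails, so the kubeadm phases below regenerate them against the right address. Condensed (a sketch):

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:56125' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done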
	I0513 17:31:47.949086   36897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:31:47.952430   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:31:47.985137   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:31:48.502412   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:31:48.678731   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:31:48.704702   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:31:48.727727   36897 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:31:48.727786   36897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:31:49.230129   36897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:31:49.729083   36897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:31:50.229804   36897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:31:50.234173   36897 api_server.go:72] duration metric: took 1.506480792s to wait for apiserver process to appear ...
	I0513 17:31:50.234185   36897 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:31:50.234212   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:31:55.236213   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:31:55.236258   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:00.236511   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:00.236567   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:05.236932   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:05.236984   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:10.237497   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:10.237580   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:15.238972   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:15.239041   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:20.240115   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:20.240195   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:25.241932   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:25.241978   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:30.242751   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:30.242867   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:35.245249   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:35.245294   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:40.247509   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:40.247588   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:45.250200   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:45.250283   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:50.251331   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
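
Each probe above fails after roughly five seconds with "context deadline exceeded", which is the HTTP client's own timeout expiring while awaiting headers, not an error response from the apiserver: nothing is answering on 10.0.2.15:8443 at all. A minimal sketch of this polling pattern, using the endpoint from the log; the InsecureSkipVerify shortcut is an illustration-only assumption (a real client would trust the cluster CA instead):

```go
// Sketch of the healthz polling loop seen above: a short per-request client
// timeout makes an unresponsive apiserver surface as "context deadline
// exceeded" on every probe, ~5 s apart.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap between probes in the log
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert; skipping verification
			// here is a shortcut for the sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)           // e.g. context deadline exceeded
			time.Sleep(500 * time.Millisecond)     // avoid busy-looping on fast failures
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("gave up waiting for healthz")
}
```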
	I0513 17:32:50.251774   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:32:50.294571   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:32:50.294711   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:32:50.316848   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:32:50.316938   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:32:50.331295   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:32:50.331372   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:32:50.343494   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:32:50.343562   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:32:50.354102   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:32:50.354178   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:32:50.364882   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:32:50.364950   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:32:50.374496   36897 logs.go:276] 0 containers: []
	W0513 17:32:50.374505   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:32:50.374561   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:32:50.385126   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:32:50.385141   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:32:50.385146   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:32:50.397264   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:32:50.397275   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:32:50.402233   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:32:50.402245   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:32:50.473454   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:32:50.473467   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:32:50.494136   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:32:50.494145   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:32:50.511187   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:32:50.511197   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:32:50.523051   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:32:50.523063   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:32:50.564170   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:32:50.564184   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:32:50.583108   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:32:50.583121   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:32:50.594165   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:32:50.594176   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:32:50.607753   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:32:50.607767   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:32:50.619321   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:32:50.619332   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:32:50.646095   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:32:50.646106   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:32:50.657639   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:32:50.657650   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:32:50.672207   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:32:50.672218   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:32:50.683505   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:32:50.683517   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:32:50.700236   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:32:50.700252   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
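
Because healthz never comes up, every cycle falls back to the same diagnostics pass: container IDs are discovered per component with a docker name filter (`name=k8s_<component>`), then the last 400 lines of each container's logs are tailed. A sketch of that gathering pattern, assuming docker on PATH; this is an illustration, not minikube's actual logs.go:

```go
// Minimal sketch of the log-gathering cycle above: list all containers
// (running or exited) whose name matches a component, then tail each one.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("docker logs failed:", err)
			}
			fmt.Printf("--- %s [%s] ---\n%s", c, id, out)
		}
	}
}
```

Note the two IDs per control-plane component in the log (e.g. 63a5d970fd15 and 8176cd4f3d53 for kube-apiserver): `docker ps -a` includes exited containers, so both the current and the previous, crashed instance are tailed.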
	I0513 17:32:53.214507   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:32:58.216851   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:32:58.217297   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:32:58.260261   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:32:58.260389   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:32:58.282846   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:32:58.282959   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:32:58.298760   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:32:58.298830   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:32:58.316242   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:32:58.316318   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:32:58.326433   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:32:58.326495   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:32:58.341967   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:32:58.342039   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:32:58.351777   36897 logs.go:276] 0 containers: []
	W0513 17:32:58.351787   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:32:58.351840   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:32:58.362141   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:32:58.362158   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:32:58.362163   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:32:58.376171   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:32:58.376181   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:32:58.388113   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:32:58.388126   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:32:58.399598   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:32:58.399607   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:32:58.410496   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:32:58.410507   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:32:58.422395   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:32:58.422409   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:32:58.460631   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:32:58.460638   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:32:58.497084   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:32:58.497098   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:32:58.511393   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:32:58.511403   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:32:58.526022   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:32:58.526034   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:32:58.542273   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:32:58.542283   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:32:58.554302   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:32:58.554317   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:32:58.572493   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:32:58.572505   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:32:58.584098   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:32:58.584111   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:32:58.600232   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:32:58.600241   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:32:58.604523   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:32:58.604533   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:32:58.623858   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:32:58.623868   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:01.151830   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:06.154270   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:06.154549   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:06.185029   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:06.185184   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:06.203408   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:06.203513   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:06.221454   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:06.221522   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:06.232853   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:06.232925   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:06.249311   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:06.249375   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:06.259763   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:06.259827   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:06.271133   36897 logs.go:276] 0 containers: []
	W0513 17:33:06.271144   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:06.271198   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:06.281765   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:06.281784   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:06.281789   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:06.296515   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:06.296525   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:06.307762   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:06.307772   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:06.320529   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:06.320540   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:06.341116   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:06.341127   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:06.352930   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:06.352940   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:06.364111   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:06.364122   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:06.388974   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:06.388981   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:06.422719   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:06.422733   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:06.443810   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:06.443822   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:06.460508   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:06.460521   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:06.472302   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:06.472314   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:06.512227   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:06.512234   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:06.524170   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:06.524179   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:06.535187   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:06.535199   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:06.539622   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:06.539628   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:06.554916   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:06.554927   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:09.069569   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:14.071804   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:14.072001   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:14.093242   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:14.093326   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:14.107834   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:14.107899   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:14.119767   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:14.119828   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:14.130367   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:14.130430   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:14.144907   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:14.144966   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:14.155098   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:14.155155   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:14.165182   36897 logs.go:276] 0 containers: []
	W0513 17:33:14.165193   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:14.165246   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:14.175559   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:14.175576   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:14.175581   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:14.187501   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:14.187511   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:14.204155   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:14.204178   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:14.230014   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:14.230022   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:14.241907   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:14.241920   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:14.281772   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:14.281782   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:14.316208   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:14.316219   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:14.327794   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:14.327807   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:14.343496   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:14.343509   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:14.357089   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:14.357101   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:14.375062   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:14.375073   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:14.386070   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:14.386083   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:14.390614   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:14.390622   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:14.406093   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:14.406103   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:14.417927   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:14.417939   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:14.429155   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:14.429164   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:14.444332   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:14.444343   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:16.958755   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:21.961542   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:21.961935   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:21.997165   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:21.997294   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:22.021115   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:22.021216   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:22.035984   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:22.036057   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:22.048258   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:22.048323   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:22.058890   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:22.058959   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:22.069774   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:22.069834   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:22.080510   36897 logs.go:276] 0 containers: []
	W0513 17:33:22.080534   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:22.080589   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:22.091539   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:22.091557   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:22.091562   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:22.109670   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:22.109680   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:22.121554   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:22.121568   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:22.133463   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:22.133477   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:22.147945   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:22.147955   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:22.173359   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:22.173366   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:22.187997   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:22.188008   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:22.228065   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:22.228192   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:22.233547   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:22.233558   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:22.269552   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:22.269565   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:22.283145   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:22.283156   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:22.294917   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:22.294928   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:22.310529   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:22.310541   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:22.322187   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:22.322197   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:22.336101   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:22.336111   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:22.347041   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:22.347052   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:22.358674   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:22.358683   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:24.875604   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:29.877964   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:29.878341   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:29.915581   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:29.915722   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:29.939217   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:29.939326   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:29.960006   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:29.960076   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:29.973063   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:29.973137   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:29.984353   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:29.984425   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:29.995470   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:29.995539   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:30.005881   36897 logs.go:276] 0 containers: []
	W0513 17:33:30.005891   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:30.005944   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:30.016762   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:30.016780   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:30.016785   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:30.057217   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:30.057228   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:30.068837   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:30.068846   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:30.093073   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:30.093080   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:30.105629   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:30.105642   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:30.118443   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:30.118454   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:30.132301   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:30.132311   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:30.143761   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:30.143774   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:30.154946   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:30.154956   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:30.169394   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:30.169404   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:30.183896   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:30.183905   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:30.194958   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:30.194969   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:30.206304   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:30.206319   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:30.210670   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:30.210677   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:30.244217   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:30.244227   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:30.259844   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:30.259855   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:30.277485   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:30.277497   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:32.790929   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:37.793394   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:37.793575   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:37.815772   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:37.815854   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:37.831505   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:37.831588   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:37.844449   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:37.844516   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:37.855925   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:37.855993   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:37.866179   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:37.866241   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:37.880929   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:37.880996   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:37.891846   36897 logs.go:276] 0 containers: []
	W0513 17:33:37.891859   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:37.891912   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:37.907494   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:37.907514   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:37.907519   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:37.932267   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:37.932283   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:37.972816   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:37.972827   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:37.985360   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:37.985376   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:38.002064   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:38.002076   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:38.015548   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:38.015560   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:38.030712   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:38.030722   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:38.042648   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:38.042658   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:38.055147   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:38.055158   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:38.067589   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:38.067600   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:38.079265   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:38.079276   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:38.091421   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:38.091432   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:38.103049   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:38.103060   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:38.117641   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:38.117655   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:38.135442   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:38.135455   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:38.139631   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:38.139638   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:38.177248   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:38.177259   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:40.693645   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:45.696363   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:45.696596   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:45.708816   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:45.708887   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:45.720808   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:45.720880   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:45.731721   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:45.731779   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:45.742301   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:45.742369   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:45.753125   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:45.753194   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:45.763883   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:45.763949   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:45.774003   36897 logs.go:276] 0 containers: []
	W0513 17:33:45.774016   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:45.774066   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:45.788939   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:45.788959   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:45.788964   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:45.803068   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:45.803079   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:45.820567   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:45.820578   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:45.847002   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:45.847010   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:45.858707   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:45.858720   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:45.863034   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:45.863040   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:45.898791   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:45.898803   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:45.913555   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:45.913566   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:45.929678   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:45.929689   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:45.941273   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:45.941283   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:45.954802   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:45.954814   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:45.966589   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:45.966601   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:45.978966   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:45.978976   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:45.994431   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:45.994442   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:46.005583   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:46.005594   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:46.046965   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:46.046976   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:46.066971   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:46.066981   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:48.579609   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:33:53.581855   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:33:53.581994   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:33:53.596479   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:33:53.596552   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:33:53.607371   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:33:53.607437   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:33:53.620054   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:33:53.620127   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:33:53.630939   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:33:53.631011   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:33:53.642140   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:33:53.642209   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:33:53.653422   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:33:53.653488   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:33:53.664118   36897 logs.go:276] 0 containers: []
	W0513 17:33:53.664128   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:33:53.664179   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:33:53.674865   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:33:53.674885   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:33:53.674891   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:33:53.687783   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:33:53.687795   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:33:53.706103   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:33:53.706114   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:33:53.717768   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:33:53.717781   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:33:53.729829   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:33:53.729841   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:33:53.756250   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:33:53.756258   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:33:53.768663   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:33:53.768675   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:33:53.773646   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:33:53.773655   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:33:53.789500   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:33:53.789511   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:33:53.829654   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:33:53.829667   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:33:53.844729   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:33:53.844741   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:33:53.860018   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:33:53.860032   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:33:53.871072   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:33:53.871083   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:33:53.883333   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:33:53.883344   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:33:53.899681   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:33:53.899692   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:33:53.911580   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:33:53.911592   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:33:53.954991   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:33:53.955004   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:33:56.468950   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:01.471050   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:01.471168   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:01.483862   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:01.487766   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:01.501825   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:01.501902   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:01.514214   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:01.514295   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:01.526194   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:01.526267   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:01.539903   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:01.539978   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:01.563129   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:01.563201   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:01.575570   36897 logs.go:276] 0 containers: []
	W0513 17:34:01.575596   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:01.575709   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:01.589076   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:01.589095   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:01.589108   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:01.629846   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:01.629861   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:01.645401   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:01.645414   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:01.658275   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:01.658288   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:01.685027   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:01.685049   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:01.701220   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:01.701232   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:01.715932   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:01.715949   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:01.734363   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:01.734376   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:01.746868   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:01.746881   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:01.759859   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:01.759871   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:01.764956   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:01.764969   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:01.778120   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:01.778133   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:01.800532   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:01.800546   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:01.813126   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:01.813139   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:01.855200   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:01.855217   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:01.871476   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:01.871487   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:01.890314   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:01.890329   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:04.406315   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:09.408522   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
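The five-second gap between each "Checking apiserver healthz" line and its "context deadline exceeded" error is a client-side timeout, not a server response: the GET never receives headers before the request budget runs out. A minimal Go sketch of such a probe (the endpoint URL is taken from the log above; the 5s budget and the InsecureSkipVerify setting are illustrative assumptions, not minikube's actual client configuration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Self-signed apiserver certs would fail verification from outside the
    	// node, so skip verification for this illustrative probe only.
    	client := &http.Client{
    		Timeout: 5 * time.Second, // mirrors the ~5s deadline seen in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		// On timeout this prints "context deadline exceeded (Client.Timeout
    		// exceeded while awaiting headers)", the same error logged above.
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz status:", resp.Status)
    }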
	I0513 17:34:09.408756   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:09.432980   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:09.433073   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:09.447480   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:09.447560   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:09.460031   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:09.460104   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:09.470974   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:09.471050   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:09.481247   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:09.481306   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:09.492009   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:09.492083   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:09.502011   36897 logs.go:276] 0 containers: []
	W0513 17:34:09.502027   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:09.502086   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:09.512408   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:09.512429   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:09.512435   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:09.526551   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:09.526563   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:09.538139   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:09.538152   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:09.553515   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:09.553528   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:09.588797   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:09.588810   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:09.601078   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:09.601089   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:09.625901   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:09.625914   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:09.638279   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:09.638289   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:09.649835   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:09.649845   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:09.667567   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:09.667579   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:09.683494   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:09.683508   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:09.688154   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:09.688161   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:09.702627   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:09.702637   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:09.726215   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:09.726226   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:09.764901   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:09.764910   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:09.779809   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:09.779819   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:09.798040   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:09.798050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
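Each retry re-enumerates the control-plane containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, which is why the same ID pairs reappear before every gathering pass. A hedged Go equivalent of that enumeration step, shelling out the way ssh_runner does (the component list is copied from the log; running docker locally rather than over SSH is a simplification):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers (running or exited)
    // whose name matches the given component, mirroring the
    // "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" calls above.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
    	}
    }

With no kindnet container on this cluster, the kindnet query returns an empty list, producing the repeated "No container was found matching \"kindnet\"" warning.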
	I0513 17:34:12.310071   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:17.312186   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:17.312278   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:17.326345   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:17.326423   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:17.337113   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:17.337183   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:17.347928   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:17.347991   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:17.358014   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:17.358085   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:17.369127   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:17.369203   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:17.381308   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:17.381403   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:17.394159   36897 logs.go:276] 0 containers: []
	W0513 17:34:17.394172   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:17.394254   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:17.405655   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:17.405673   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:17.405678   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:17.424373   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:17.424386   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:17.440136   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:17.440157   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:17.454102   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:17.454116   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:17.467059   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:17.467072   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:17.479657   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:17.479677   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:17.506197   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:17.506217   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:17.549431   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:17.549447   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:17.565862   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:17.565875   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:17.579146   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:17.579160   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:17.591064   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:17.591077   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:17.627670   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:17.627683   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:17.645636   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:17.645650   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:17.657488   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:17.657506   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:17.670351   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:17.670364   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:17.675335   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:17.675352   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:17.691239   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:17.691253   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:20.207336   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:25.209656   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:25.209875   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:25.228036   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:25.228131   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:25.242192   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:25.242262   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:25.258435   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:25.258504   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:25.268799   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:25.268858   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:25.279719   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:25.279785   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:25.295186   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:25.295249   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:25.304772   36897 logs.go:276] 0 containers: []
	W0513 17:34:25.304782   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:25.304831   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:25.315015   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:25.315033   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:25.315038   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:25.352877   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:25.352884   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:25.387080   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:25.387093   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:25.400706   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:25.400719   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:25.413333   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:25.413346   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:25.431036   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:25.431049   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:25.442830   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:25.442843   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:25.454148   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:25.454159   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:25.468971   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:25.468983   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:25.492011   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:25.492020   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:25.502839   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:25.502850   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:25.518345   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:25.518357   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:25.522690   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:25.522698   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:25.536509   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:25.536518   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:25.550880   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:25.550889   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:25.562800   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:25.562811   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:25.574159   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:25.574170   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
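The "container status" step uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: try crictl, and if that fails, fall back to plain docker. A rough Go rendering of the same preference order (local execution and the bare `ps -a` arguments are assumptions; the real shell line falls back on command failure, approximated here with a PATH lookup):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl when it is on PATH, mirroring the
    	// `which crictl || echo crictl` idiom in the log.
    	tool := "docker"
    	if _, err := exec.LookPath("crictl"); err == nil {
    		tool = "crictl"
    	}
    	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s ps -a failed: %v\n", tool, err)
    		return
    	}
    	fmt.Print(string(out))
    }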
	I0513 17:34:28.095373   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:33.097565   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:33.097680   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:33.110936   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:33.111014   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:33.123906   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:33.123981   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:33.137083   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:33.137154   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:33.149938   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:33.150013   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:33.162432   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:33.162504   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:33.177342   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:33.177409   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:33.194252   36897 logs.go:276] 0 containers: []
	W0513 17:34:33.194264   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:33.194329   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:33.206508   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:33.206527   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:33.206533   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:33.222559   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:33.222572   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:33.236077   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:33.236090   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:33.250396   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:33.250410   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:33.269928   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:33.269940   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:33.285653   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:33.285666   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:33.323364   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:33.323377   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:33.328255   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:33.328265   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:33.341061   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:33.341074   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:33.355226   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:33.355236   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:33.367409   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:33.367421   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:33.384343   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:33.384357   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:33.428080   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:33.428097   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:33.444784   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:33.444796   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:33.456392   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:33.456407   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:33.469593   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:33.469606   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:33.496166   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:33.496184   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:36.013631   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:41.015850   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:41.016361   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:41.054490   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:41.054627   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:41.075230   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:41.075335   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:41.090717   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:41.090799   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:41.106418   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:41.106500   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:41.117660   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:41.117727   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:41.128427   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:41.128498   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:41.138339   36897 logs.go:276] 0 containers: []
	W0513 17:34:41.138350   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:41.138406   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:41.149188   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:41.149206   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:41.149212   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:41.187982   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:41.187992   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:41.199221   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:41.199236   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:41.213199   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:41.213210   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:41.228004   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:41.228014   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:41.250537   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:41.250548   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:41.286047   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:41.286061   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:41.305283   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:41.305295   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:41.317273   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:41.317284   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:41.335128   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:41.335140   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:41.347084   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:41.347097   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:41.351589   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:41.351599   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:41.365304   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:41.365315   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:41.377433   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:41.377443   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:41.389400   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:41.389410   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:41.405282   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:41.405293   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:41.417068   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:41.417079   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:43.931236   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:48.931667   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:48.931743   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:48.945986   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:48.946041   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:48.963491   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:48.963541   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:48.974902   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:48.974969   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:48.987808   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:48.987863   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:48.999863   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:48.999908   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:49.010959   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:49.011013   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:49.022474   36897 logs.go:276] 0 containers: []
	W0513 17:34:49.022485   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:49.022520   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:49.034595   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:49.034619   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:49.034625   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:49.054170   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:49.054183   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:49.066830   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:49.066841   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:49.104742   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:49.104755   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:49.120239   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:49.120250   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:49.134023   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:49.134036   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:49.149198   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:49.149209   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:49.154100   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:49.154114   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:49.170410   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:49.170424   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:49.186556   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:49.186571   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:49.206064   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:49.206078   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:49.231625   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:49.231639   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:49.244807   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:49.244821   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:49.286275   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:49.286288   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:49.298049   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:49.298060   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:49.320813   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:49.320824   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:49.334588   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:49.334602   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:51.850022   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:56.852122   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:56.852282   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:56.863945   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:56.864018   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:56.874373   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:56.874441   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:56.885401   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:56.885467   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:56.895919   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:56.895984   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:56.906529   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:56.906603   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:56.916917   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:56.916987   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:56.927015   36897 logs.go:276] 0 containers: []
	W0513 17:34:56.927028   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:56.927091   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:56.937822   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:56.937844   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:56.937850   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:56.948965   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:56.948982   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:56.963130   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:56.963139   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:56.976263   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:56.976276   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:56.987460   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:56.987471   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:56.999037   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:56.999047   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:57.015029   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:57.015043   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:57.026317   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:57.026332   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:57.037707   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:57.037720   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:57.042021   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:57.042027   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:57.056906   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:57.056917   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:57.075080   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:57.075092   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:57.088741   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:57.088750   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:57.104620   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:57.104631   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:57.129876   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:57.129887   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:57.153178   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:57.153187   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:57.193155   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:57.193164   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:59.731952   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:04.732371   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:04.732545   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:04.746739   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:04.746820   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:04.758210   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:04.758275   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:04.769014   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:04.769080   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:04.779390   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:04.779453   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:04.793172   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:04.793244   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:04.808339   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:04.808403   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:04.818676   36897 logs.go:276] 0 containers: []
	W0513 17:35:04.818687   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:04.818734   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:04.830312   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:04.830333   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:04.830339   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:04.842509   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:04.842520   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:04.853745   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:04.853756   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:04.892142   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:04.892149   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:04.905976   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:04.905986   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:04.918541   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:04.918552   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:04.929622   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:04.929634   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:04.947698   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:04.947708   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:04.960844   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:04.960855   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:04.972548   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:04.972559   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:05.008679   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:05.008689   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:05.023416   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:05.023425   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:05.039224   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:05.039235   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:05.050974   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:05.050988   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:05.055324   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:05.055330   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:05.068553   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:05.068563   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:05.092930   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:05.092939   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:07.608886   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:12.611043   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:12.611124   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:12.622553   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:12.622625   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:12.642554   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:12.642619   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:12.653490   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:12.653554   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:12.663913   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:12.663983   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:12.675190   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:12.675257   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:12.685722   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:12.685785   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:12.706632   36897 logs.go:276] 0 containers: []
	W0513 17:35:12.706645   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:12.706703   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:12.717312   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:12.717329   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:12.717335   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:12.731551   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:12.731562   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:12.743347   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:12.743357   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:12.759216   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:12.759229   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:12.771054   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:12.771064   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:12.782451   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:12.782467   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:12.794186   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:12.794196   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:12.807200   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:12.807214   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:12.819156   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:12.819166   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:12.833459   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:12.833472   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:12.847657   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:12.847666   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:12.859356   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:12.859367   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:12.863619   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:12.863625   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:12.897420   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:12.897429   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:12.908381   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:12.908390   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:12.931036   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:12.931045   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:12.968730   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:12.968740   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:15.487162   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:20.489436   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:20.489799   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:20.524037   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:20.524173   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:20.549254   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:20.549339   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:20.562972   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:20.563036   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:20.576093   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:20.576175   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:20.586783   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:20.586856   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:20.597418   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:20.597489   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:20.608164   36897 logs.go:276] 0 containers: []
	W0513 17:35:20.608176   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:20.608238   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:20.618873   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:20.618892   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:20.618897   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:20.623292   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:20.623302   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:20.634531   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:20.634541   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:20.650743   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:20.650755   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:20.673340   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:20.673351   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:20.687175   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:20.687187   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:20.698952   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:20.698966   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:20.715925   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:20.715936   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:20.728506   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:20.728517   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:20.769167   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:20.769177   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:20.804242   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:20.804256   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:20.824290   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:20.824301   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:20.835943   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:20.835953   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:20.849963   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:20.849976   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:20.863342   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:20.863356   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:20.877692   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:20.877702   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:20.889481   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:20.889492   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:23.402797   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:28.405168   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:28.405548   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:28.437786   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:28.437925   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:28.455835   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:28.455928   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:28.469078   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:28.469160   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:28.481269   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:28.481338   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:28.499121   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:28.499196   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:28.509759   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:28.509829   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:28.520397   36897 logs.go:276] 0 containers: []
	W0513 17:35:28.520406   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:28.520460   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:28.531049   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:28.531066   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:28.531072   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:28.542856   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:28.542868   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:28.554581   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:28.554592   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:28.569509   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:28.569520   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:28.582225   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:28.582236   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:28.594163   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:28.594174   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:28.612474   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:28.612484   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:28.625070   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:28.625081   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:28.629590   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:28.629597   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:28.644465   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:28.644477   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:28.659169   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:28.659178   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:28.675058   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:28.675068   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:28.714308   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:28.714317   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:28.727154   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:28.727165   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:28.738658   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:28.738673   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:28.762456   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:28.762466   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:28.797990   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:28.798004   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
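	(Editor's note: each failed health check triggers the diagnostic cycle just shown — enumerate control-plane containers by their `k8s_<component>` name prefix with `docker ps -a --filter=name=... --format={{.ID}}`, then tail the last 400 log lines of each. The sketch below mirrors that two-step pattern under the assumption that docker is reachable locally; the real run executes the same commands on the guest over SSH via ssh_runner.go.)

```go
// gatherlogs.go: a sketch of the enumerate-then-tail pattern above.
// Assumption: docker is reachable locally; minikube runs the identical
// commands on the node over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches k8s_<component>, mirroring:
//   docker ps -a --filter=name=k8s_etcd --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("enumerate failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// docker logs --tail 400 <id>, as in the "Gathering logs for ..." lines.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("gathered %d bytes for %s\n", len(logs), id)
		}
	}
}
```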
	I0513 17:35:31.317405   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:36.319622   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:36.319811   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:36.334686   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:36.334762   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:36.347536   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:36.347603   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:36.358623   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:36.358695   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:36.370414   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:36.370478   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:36.380857   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:36.380925   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:36.391582   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:36.391640   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:36.402194   36897 logs.go:276] 0 containers: []
	W0513 17:35:36.402205   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:36.402259   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:36.412575   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:36.412594   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:36.412600   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:36.424139   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:36.424152   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:36.435464   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:36.435478   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:36.475329   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:36.475337   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:36.479337   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:36.479343   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:36.516564   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:36.516578   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:36.530394   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:36.530404   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:36.542100   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:36.542113   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:36.559686   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:36.559695   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:36.572212   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:36.572225   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:36.594357   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:36.594363   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:36.608412   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:36.608422   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:36.619842   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:36.619853   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:36.637824   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:36.637841   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:36.661311   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:36.661322   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:36.689117   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:36.689133   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:36.701943   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:36.701955   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:39.216185   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:44.218368   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:44.218491   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:44.232757   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:44.232836   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:44.243555   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:44.243637   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:44.254448   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:44.254519   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:44.265094   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:44.265167   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:44.277165   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:44.277230   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:44.288583   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:44.288653   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:44.298499   36897 logs.go:276] 0 containers: []
	W0513 17:35:44.298514   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:44.298580   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:44.309139   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:44.309157   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:44.309162   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:44.324186   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:44.324197   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:44.339855   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:44.339865   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:44.375141   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:44.375151   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:44.387698   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:44.387709   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:44.398919   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:44.398931   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:44.410415   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:44.410431   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:44.425303   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:44.425316   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:44.447548   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:44.447556   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:44.464959   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:44.464969   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:44.482795   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:44.482806   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:44.494552   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:44.494563   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:44.506673   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:44.506686   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:44.524620   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:44.524631   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:44.537882   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:44.537894   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:44.578809   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:44.578818   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:44.583129   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:44.583138   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:47.096956   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:52.099288   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:52.099420   36897 kubeadm.go:591] duration metric: took 4m14.50874325s to restartPrimaryControlPlane
	W0513 17:35:52.099530   36897 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0513 17:35:52.099583   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0513 17:35:53.121838   36897 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.022258542s)
	I0513 17:35:53.121909   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 17:35:53.127363   36897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:35:53.130262   36897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:35:53.133018   36897 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 17:35:53.133024   36897 kubeadm.go:156] found existing configuration files:
	
	I0513 17:35:53.133051   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/admin.conf
	I0513 17:35:53.135595   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 17:35:53.135617   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:35:53.138115   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/kubelet.conf
	I0513 17:35:53.141288   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 17:35:53.141310   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:35:53.144266   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/controller-manager.conf
	I0513 17:35:53.147014   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 17:35:53.147040   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:35:53.149926   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/scheduler.conf
	I0513 17:35:53.152809   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 17:35:53.152831   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
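	(Editor's note: the kubeadm.go:162 block above is the stale-config cleanup — each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint, and any file that does not contain it is removed so kubeadm can regenerate it; here the files are simply absent after the reset, so each grep exits with status 2 and the rm is a no-op. A minimal Go sketch of that logic, with the endpoint and paths taken from the log:)

```go
// staleconf.go: a sketch of the stale-kubeconfig check above — keep a conf
// file only if it references the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:56125"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove so kubeadm regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // no-op when the file is already absent
		}
	}
}
```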
	I0513 17:35:53.155233   36897 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 17:35:53.170895   36897 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0513 17:35:53.170921   36897 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 17:35:53.217757   36897 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 17:35:53.217817   36897 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 17:35:53.217893   36897 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0513 17:35:53.267503   36897 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 17:35:53.271637   36897 out.go:204]   - Generating certificates and keys ...
	I0513 17:35:53.271667   36897 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 17:35:53.271693   36897 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 17:35:53.271789   36897 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0513 17:35:53.271846   36897 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0513 17:35:53.271895   36897 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0513 17:35:53.271920   36897 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0513 17:35:53.271964   36897 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0513 17:35:53.271997   36897 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0513 17:35:53.272085   36897 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0513 17:35:53.272119   36897 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0513 17:35:53.272140   36897 kubeadm.go:309] [certs] Using the existing "sa" key
	I0513 17:35:53.272202   36897 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 17:35:53.319001   36897 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 17:35:53.764616   36897 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 17:35:53.806727   36897 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 17:35:53.905176   36897 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 17:35:53.936178   36897 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 17:35:53.936483   36897 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 17:35:53.936518   36897 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 17:35:54.009364   36897 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 17:35:54.012506   36897 out.go:204]   - Booting up control plane ...
	I0513 17:35:54.012549   36897 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 17:35:54.012594   36897 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 17:35:54.012627   36897 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 17:35:54.012675   36897 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 17:35:54.012766   36897 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0513 17:35:58.517031   36897 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505628 seconds
	I0513 17:35:58.517225   36897 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 17:35:58.523691   36897 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 17:35:59.055138   36897 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 17:35:59.055419   36897 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-056000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 17:35:59.559215   36897 kubeadm.go:309] [bootstrap-token] Using token: yi4utz.blo7i6p65ke8d3ns
	I0513 17:35:59.563322   36897 out.go:204]   - Configuring RBAC rules ...
	I0513 17:35:59.563404   36897 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 17:35:59.563463   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 17:35:59.566999   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 17:35:59.568219   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 17:35:59.569209   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 17:35:59.570172   36897 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 17:35:59.573622   36897 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 17:35:59.716740   36897 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 17:35:59.963343   36897 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 17:35:59.963796   36897 kubeadm.go:309] 
	I0513 17:35:59.963834   36897 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 17:35:59.963840   36897 kubeadm.go:309] 
	I0513 17:35:59.963876   36897 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 17:35:59.963878   36897 kubeadm.go:309] 
	I0513 17:35:59.963892   36897 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 17:35:59.963925   36897 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 17:35:59.963970   36897 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 17:35:59.963975   36897 kubeadm.go:309] 
	I0513 17:35:59.964000   36897 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 17:35:59.964004   36897 kubeadm.go:309] 
	I0513 17:35:59.964027   36897 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 17:35:59.964031   36897 kubeadm.go:309] 
	I0513 17:35:59.964077   36897 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 17:35:59.964156   36897 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 17:35:59.964206   36897 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 17:35:59.964211   36897 kubeadm.go:309] 
	I0513 17:35:59.964278   36897 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 17:35:59.964323   36897 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 17:35:59.964327   36897 kubeadm.go:309] 
	I0513 17:35:59.964403   36897 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yi4utz.blo7i6p65ke8d3ns \
	I0513 17:35:59.964484   36897 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 \
	I0513 17:35:59.964505   36897 kubeadm.go:309] 	--control-plane 
	I0513 17:35:59.964509   36897 kubeadm.go:309] 
	I0513 17:35:59.964549   36897 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 17:35:59.964552   36897 kubeadm.go:309] 
	I0513 17:35:59.964589   36897 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yi4utz.blo7i6p65ke8d3ns \
	I0513 17:35:59.964643   36897 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 
	I0513 17:35:59.964735   36897 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 17:35:59.964746   36897 cni.go:84] Creating CNI manager for ""
	I0513 17:35:59.964753   36897 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:35:59.968205   36897 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0513 17:35:59.974333   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0513 17:35:59.977340   36897 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
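	(Editor's note: the scp line above writes a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist; the log does not show its contents. Below is an illustrative minimal bridge+portmap conflist of the shape the bridge CNI plugin accepts, written the same way — the subnet and plugin parameters are assumptions, not the file minikube actually generated.)

```go
// writecni.go: illustrative only — a minimal bridge+portmap conflist of the
// kind installed above. The subnet is an assumption.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```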
	I0513 17:35:59.982576   36897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 17:35:59.982627   36897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 17:35:59.982636   36897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-056000 minikube.k8s.io/updated_at=2024_05_13T17_35_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=running-upgrade-056000 minikube.k8s.io/primary=true
	I0513 17:36:00.020118   36897 kubeadm.go:1107] duration metric: took 37.53225ms to wait for elevateKubeSystemPrivileges
	I0513 17:36:00.020135   36897 ops.go:34] apiserver oom_adj: -16
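	(Editor's note: the ops.go:34 line reports the value read by the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj` run — an adjustment of -16 means the apiserver process is deprioritized for the kernel OOM killer. A sketch of the same check, assuming it runs on the node itself and simplifying the pgrep pattern used in the log:)

```go
// oomadj.go: a sketch of the oom_adj check above — find the kube-apiserver
// PID and read its OOM-killer adjustment. Assumption: runs on the node;
// the pgrep pattern is simplified relative to the one in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pidOut, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(pidOut))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in this run
}
```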
	W0513 17:36:00.024752   36897 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 17:36:00.024761   36897 kubeadm.go:393] duration metric: took 4m22.478784792s to StartCluster
	I0513 17:36:00.024771   36897 settings.go:142] acquiring lock: {Name:mk9ef358ebdddf34ee47447e0095ef8dc921e138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:36:00.024916   36897 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:36:00.025307   36897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:36:00.025481   36897 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:36:00.030305   36897 out.go:177] * Verifying Kubernetes components...
	I0513 17:36:00.025501   36897 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 17:36:00.025593   36897 config.go:182] Loaded profile config "running-upgrade-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:36:00.038252   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:36:00.038281   36897 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-056000"
	I0513 17:36:00.038296   36897 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-056000"
	I0513 17:36:00.038311   36897 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-056000"
	I0513 17:36:00.038325   36897 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-056000"
	W0513 17:36:00.038329   36897 addons.go:243] addon storage-provisioner should already be in state true
	I0513 17:36:00.038338   36897 host.go:66] Checking if "running-upgrade-056000" exists ...
	I0513 17:36:00.039386   36897 kapi.go:59] client config for running-upgrade-056000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b8de10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 17:36:00.039752   36897 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-056000"
	W0513 17:36:00.039757   36897 addons.go:243] addon default-storageclass should already be in state true
	I0513 17:36:00.039764   36897 host.go:66] Checking if "running-upgrade-056000" exists ...
	I0513 17:36:00.043313   36897 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:36:00.047180   36897 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:36:00.047187   36897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 17:36:00.047194   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:36:00.047969   36897 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 17:36:00.047974   36897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 17:36:00.047978   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:36:00.120401   36897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:36:00.127712   36897 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:36:00.127766   36897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:36:00.131876   36897 api_server.go:72] duration metric: took 106.386084ms to wait for apiserver process to appear ...
	I0513 17:36:00.131883   36897 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:36:00.131889   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:00.165320   36897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:36:00.189669   36897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
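	(Editor's note: the two Run lines above are the addon-apply step — each manifest was scp'd into /etc/kubernetes/addons earlier and is now applied with the versioned kubectl binary against the in-VM kubeconfig. A sketch of that invocation pattern, assuming it runs on the node itself; the real run goes over SSH. Paths and the kubectl version are taken from the log.)

```go
// applyaddon.go: a sketch of the addon-apply step above. Assumption: runs
// on the node; minikube executes the same command over SSH.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(manifest string) error {
	// sudo accepts leading VAR=value assignments, as in the logged command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyManifest(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
```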
	I0513 17:36:05.133869   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:05.133893   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:10.134057   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:10.134083   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:15.134734   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:15.134780   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:20.135248   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:20.135302   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:25.136002   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:25.136024   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:30.136740   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:30.136790   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0513 17:36:30.539603   36897 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0513 17:36:30.542042   36897 out.go:177] * Enabled addons: storage-provisioner
	I0513 17:36:30.552778   36897 addons.go:505] duration metric: took 30.527894708s for enable addons: enabled=[storage-provisioner]
	I0513 17:36:35.138206   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:35.138242   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:40.139019   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:40.139041   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:45.140624   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:45.140659   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:50.142736   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:50.142757   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:55.144599   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:55.144629   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:00.146729   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:00.146844   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:00.158826   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:00.158890   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:00.169167   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:00.169225   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:00.179666   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:00.179740   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:00.190310   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:00.190372   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:00.200804   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:00.200880   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:00.210827   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:00.210892   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:00.221436   36897 logs.go:276] 0 containers: []
	W0513 17:37:00.221447   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:00.221499   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:00.232465   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:00.232480   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:00.232485   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:00.267911   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:00.267923   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:00.282593   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:00.282604   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:00.299024   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:00.299037   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:00.310661   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:00.310674   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:00.322366   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:00.322378   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:00.340377   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:00.340387   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:00.364404   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:00.364410   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:00.368592   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:00.368597   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:00.379549   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:00.379560   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:00.391253   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:00.391263   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:00.406101   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:00.406109   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:00.418179   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:00.418194   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:02.955137   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:07.957365   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:07.957542   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:07.973307   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:07.973393   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:07.985328   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:07.985393   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:07.996409   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:07.996480   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:08.007050   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:08.007117   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:08.017273   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:08.017346   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:08.027336   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:08.027397   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:08.038179   36897 logs.go:276] 0 containers: []
	W0513 17:37:08.038190   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:08.038242   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:08.049017   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:08.049031   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:08.049037   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:08.060613   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:08.060624   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:08.072143   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:08.072153   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:08.095502   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:08.095513   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:08.106745   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:08.106756   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:08.142212   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:08.142223   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:08.180024   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:08.180035   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:08.194005   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:08.194016   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:08.212638   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:08.212647   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:08.225191   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:08.225202   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:08.248539   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:08.248549   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:08.266344   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:08.266354   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:08.277867   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:08.277877   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:10.783511   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:15.786045   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:15.786205   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:15.809377   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:15.809443   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:15.819640   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:15.819709   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:15.830240   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:15.830304   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:15.844693   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:15.844763   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:15.860529   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:15.860595   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:15.871006   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:15.871074   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:15.881410   36897 logs.go:276] 0 containers: []
	W0513 17:37:15.881421   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:15.881478   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:15.891967   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:15.891982   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:15.891989   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:15.926360   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:15.926372   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:15.940735   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:15.940749   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:15.952408   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:15.952418   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:15.976675   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:15.976683   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:15.988018   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:15.988031   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:16.021824   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:16.021836   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:16.026876   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:16.026885   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:16.040664   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:16.040675   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:16.052233   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:16.052245   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:16.067100   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:16.067110   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:16.079068   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:16.079078   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:16.096573   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:16.096583   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:18.610034   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:23.612627   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:23.612847   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:23.638820   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:23.638918   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:23.654933   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:23.655011   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:23.667198   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:23.667265   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:23.678164   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:23.678236   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:23.688417   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:23.688479   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:23.698887   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:23.698949   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:23.714013   36897 logs.go:276] 0 containers: []
	W0513 17:37:23.714023   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:23.714075   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:23.724680   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:23.724695   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:23.724701   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:23.737389   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:23.737400   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:23.754480   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:23.754493   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:23.765973   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:23.765984   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:23.778127   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:23.778138   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:23.813112   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:23.813125   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:23.817880   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:23.817887   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:23.829486   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:23.829496   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:23.844755   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:23.844765   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:23.867914   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:23.867924   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:23.901444   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:23.901454   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:23.916578   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:23.916590   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:23.930371   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:23.930383   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
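
[editor's note] The block above is one full iteration of the wait loop that repeats for the rest of this transcript: a GET against https://10.0.2.15:8443/healthz that fails with "context deadline exceeded" after the five-second client timeout, followed by a diagnostic sweep. A minimal Go sketch of that probe pattern, for orientation only — checkHealthz, the retry bound, and the TLS handling are assumptions for illustration, not minikube's actual code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a single GET against the apiserver /healthz
// endpoint with a 5-second client timeout, mirroring the gap between
// the "Checking apiserver healthz" and "stopped:" lines above.
// (Illustrative helper; not minikube's actual function.)
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the test cluster's certificate is self-signed,
		// so this throwaway probe skips verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded", as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Retry with a short pause, roughly the ~2.5 s gap the log shows
	// between the end of one sweep and the next probe.
	for attempt := 0; attempt < 10; attempt++ {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("probe failed:", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
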
	I0513 17:37:26.443818   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:31.445969   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:31.446081   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:31.461483   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:31.461576   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:31.471973   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:31.472043   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:31.482328   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:31.484213   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:31.494418   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:31.494484   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:31.508687   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:31.508756   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:31.519043   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:31.519110   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:31.529462   36897 logs.go:276] 0 containers: []
	W0513 17:37:31.529472   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:31.529523   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:31.539364   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:31.539379   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:31.539384   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:31.551799   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:31.551811   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:31.563638   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:31.563648   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:31.580675   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:31.580685   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:31.593290   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:31.593300   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:31.607888   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:31.607896   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:31.621586   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:31.621598   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:31.632966   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:31.632975   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:31.649354   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:31.649368   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:31.664009   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:31.664019   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:31.698736   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:31.698743   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:31.703409   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:31.703416   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:31.739227   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:31.739239   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:34.266064   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:39.268286   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:39.268397   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:39.279528   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:39.279604   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:39.294957   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:39.295021   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:39.304881   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:39.304943   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:39.315890   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:39.315956   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:39.326117   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:39.326183   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:39.337478   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:39.337543   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:39.351938   36897 logs.go:276] 0 containers: []
	W0513 17:37:39.351952   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:39.352007   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:39.362387   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:39.362401   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:39.362407   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:39.397742   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:39.397752   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:39.402647   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:39.402657   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:39.473037   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:39.473050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:39.487478   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:39.487489   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:39.499165   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:39.499176   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:39.513238   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:39.513248   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:39.533294   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:39.533304   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:39.551115   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:39.551125   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:39.562613   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:39.562628   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:39.586671   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:39.586683   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:39.598130   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:39.598145   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:39.613232   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:39.613242   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:42.126869   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:47.129200   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:47.129470   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:47.162834   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:47.162958   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:47.180856   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:47.180928   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:47.194539   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:47.194618   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:47.210789   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:47.210865   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:47.221474   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:47.221544   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:47.232283   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:47.232350   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:47.242429   36897 logs.go:276] 0 containers: []
	W0513 17:37:47.242446   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:47.242528   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:47.253064   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:47.253079   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:47.253084   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:47.286575   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:47.286585   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:47.290726   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:47.290736   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:47.327465   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:47.327477   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:47.340959   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:47.340969   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:47.352603   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:47.352616   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:47.375467   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:47.375475   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:47.387375   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:47.387386   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:47.401828   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:47.401838   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:47.415606   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:47.415617   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:47.429544   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:47.429554   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:47.444109   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:47.444121   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:47.464493   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:47.464502   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
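
[editor's note] Each sweep discovers containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and then tails the last 400 lines of every match, exactly as the Run: lines above show. A self-contained sketch of that discovery-and-tail pattern, assuming a local docker CLI rather than minikube's ssh_runner — containerIDs and the component slice are illustrative names, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches k8s_<component>,
// the same filter the sweep uses above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		// Matches the "logs.go:276] N containers: [...]" lines above;
		// an empty slice corresponds to the "kindnet" warning.
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Tail the last 400 lines of each container, as the
			// "Gathering logs for ..." steps do.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
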
	I0513 17:37:49.990522   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:54.992760   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:54.992979   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:55.013391   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:55.013486   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:55.028055   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:55.028120   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:55.041089   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:55.041173   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:55.052065   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:55.052126   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:55.062437   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:55.062511   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:55.082065   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:55.082137   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:55.092562   36897 logs.go:276] 0 containers: []
	W0513 17:37:55.092573   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:55.092627   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:55.103075   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:55.103092   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:55.103098   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:55.137161   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:55.137180   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:55.141640   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:55.141649   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:55.152910   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:55.152921   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:55.177857   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:55.177868   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:55.189587   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:55.189598   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:55.200996   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:55.201009   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:55.218814   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:55.218828   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:55.256694   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:55.256705   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:55.271283   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:55.271293   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:55.285272   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:55.285282   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:55.297676   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:55.297686   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:55.310231   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:55.310242   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:57.827086   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:02.829228   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:02.829431   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:02.850216   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:02.850312   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:02.864693   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:02.864761   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:02.877722   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:02.877802   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:02.888467   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:02.888536   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:02.898498   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:02.898565   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:02.909670   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:02.909739   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:02.919442   36897 logs.go:276] 0 containers: []
	W0513 17:38:02.919453   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:02.919508   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:02.932643   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:02.932660   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:02.932670   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:02.966373   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:02.966386   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:02.971426   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:02.971433   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:02.985539   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:02.985548   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:02.999861   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:02.999875   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:03.011396   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:03.011408   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:03.034603   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:03.034611   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:03.069909   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:03.069923   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:03.081324   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:03.081334   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:03.096449   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:03.096460   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:03.117758   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:03.117773   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:03.129801   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:03.129812   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:03.142422   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:03.142433   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:03.154289   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:03.154302   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:03.166236   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:03.166249   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:05.680112   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:10.682399   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:10.682697   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:10.716326   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:10.716455   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:10.735351   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:10.735467   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:10.750393   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:10.750469   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:10.767587   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:10.767659   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:10.778414   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:10.778486   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:10.789612   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:10.789683   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:10.799486   36897 logs.go:276] 0 containers: []
	W0513 17:38:10.799496   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:10.799550   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:10.809903   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:10.809920   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:10.809926   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:10.815021   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:10.815030   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:10.826128   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:10.826140   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:10.838680   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:10.838691   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:10.853331   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:10.853341   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:10.870699   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:10.870709   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:10.906516   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:10.906524   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:10.951785   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:10.951800   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:10.965822   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:10.965834   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:10.984843   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:10.984853   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:10.997948   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:10.997958   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:11.011158   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:11.011169   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:11.035526   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:11.035536   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:11.046553   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:11.046564   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:11.062136   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:11.062148   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:13.576650   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:18.578871   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:18.579116   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:18.605556   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:18.605660   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:18.623558   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:18.623651   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:18.637944   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:18.638021   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:18.649141   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:18.649208   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:18.659750   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:18.659812   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:18.669913   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:18.669982   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:18.680044   36897 logs.go:276] 0 containers: []
	W0513 17:38:18.680055   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:18.680112   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:18.690469   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:18.690487   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:18.690492   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:18.701793   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:18.701803   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:18.737096   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:18.737111   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:18.777935   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:18.777949   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:18.797152   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:18.797163   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:18.814288   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:18.814298   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:18.840126   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:18.840134   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:18.854174   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:18.854186   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:18.873188   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:18.873200   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:18.884203   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:18.884214   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:18.895807   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:18.895821   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:18.900183   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:18.900192   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:18.919388   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:18.919400   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:18.931111   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:18.931122   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:18.942788   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:18.942798   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:21.463788   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:26.464051   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:26.464228   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:26.475534   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:26.475609   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:26.486276   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:26.486345   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:26.496880   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:26.496946   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:26.508610   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:26.508681   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:26.518833   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:26.518899   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:26.534881   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:26.534952   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:26.545367   36897 logs.go:276] 0 containers: []
	W0513 17:38:26.545377   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:26.545429   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:26.556110   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:26.556125   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:26.556131   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:26.567429   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:26.567439   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:26.581979   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:26.581991   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:26.596490   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:26.596502   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:26.620351   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:26.620360   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:26.655388   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:26.655399   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:26.670552   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:26.670564   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:26.682174   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:26.682184   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:26.717173   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:26.717182   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:26.740677   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:26.740689   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:26.758403   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:26.758414   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:26.776734   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:26.776745   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:26.793848   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:26.793857   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:26.805824   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:26.805836   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:26.811081   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:26.811087   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:29.324820   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:34.327126   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:34.327326   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:34.352612   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:34.352685   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:34.363862   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:34.363929   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:34.374926   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:34.375002   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:34.385525   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:34.385594   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:34.395869   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:34.395941   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:34.406552   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:34.406621   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:34.421221   36897 logs.go:276] 0 containers: []
	W0513 17:38:34.421232   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:34.421289   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:34.431469   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:34.431485   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:34.431492   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:34.435943   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:34.435953   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:34.470138   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:34.470151   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:34.488212   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:34.488226   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:34.521058   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:34.521067   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:34.532293   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:34.532304   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:34.543772   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:34.543784   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:34.555345   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:34.555358   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:34.566762   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:34.566773   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:34.579205   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:34.579217   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:34.593244   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:34.593255   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:34.607714   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:34.607724   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:34.618781   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:34.618790   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:34.636357   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:34.636368   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:34.661177   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:34.661187   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:37.175296   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:42.177499   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:42.177718   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:42.199428   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:42.199516   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:42.215044   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:42.215118   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:42.227570   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:42.227636   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:42.238256   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:42.238320   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:42.257021   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:42.257086   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:42.267484   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:42.267548   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:42.278221   36897 logs.go:276] 0 containers: []
	W0513 17:38:42.278232   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:42.278288   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:42.289598   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:42.289613   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:42.289619   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:42.296709   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:42.296717   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:42.313478   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:42.313486   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:42.338280   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:42.338288   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:42.373269   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:42.373281   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:42.387344   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:42.387354   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:42.402396   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:42.402406   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:42.416824   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:42.416834   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:42.428577   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:42.428588   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:42.463464   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:42.463474   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:42.487021   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:42.487030   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:42.498857   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:42.498865   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:42.510835   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:42.510849   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:42.522591   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:42.522602   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:42.535201   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:42.535214   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
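
[editor's note] Two details of the host-side sweep above are worth flagging. The container-status step runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so crictl is preferred and a plain docker ps -a is the shell-level fallback when crictl is absent; kubelet and Docker/cri-docker logs come from journalctl, and dmesg is filtered to warn-and-above severities. The probe cadence can also be read straight off the timestamps: each /healthz probe times out after the 5 s client deadline and the sweep itself takes roughly 2.5-3 s, so probes land about every 8 s — and across the whole window shown here (17:37:16 through 17:39:05) the apiserver never once reports healthy.
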
	I0513 17:38:45.049201   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:50.051487   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:50.051624   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:50.067661   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:50.067730   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:50.077821   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:50.077876   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:50.088843   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:50.088913   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:50.099234   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:50.099303   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:50.109940   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:50.110005   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:50.120741   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:50.120805   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:50.131204   36897 logs.go:276] 0 containers: []
	W0513 17:38:50.131215   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:50.131262   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:50.141703   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:50.141719   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:50.141725   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:50.154059   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:50.154069   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:50.177758   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:50.177769   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:50.201588   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:50.201597   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:50.237165   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:50.237179   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:50.256059   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:50.256069   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:50.267727   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:50.267739   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:50.279431   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:50.279443   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:50.294613   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:50.294628   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:50.329371   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:50.329379   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:50.343673   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:50.343683   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:50.355111   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:50.355121   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:50.359755   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:50.359764   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:50.371118   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:50.371128   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:50.383070   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:50.383080   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:52.896434   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:57.897756   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:57.897848   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:57.909320   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:57.909390   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:57.920130   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:57.920197   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:57.931770   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:57.931838   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:57.942220   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:57.942284   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:57.954366   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:57.954436   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:57.965815   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:57.965881   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:57.976187   36897 logs.go:276] 0 containers: []
	W0513 17:38:57.976198   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:57.976253   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:57.987219   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:57.987240   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:57.987246   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:58.002232   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:58.002242   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:58.014705   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:58.014717   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:58.026846   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:58.026857   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:58.041277   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:58.041291   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:58.053041   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:58.053050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:58.072769   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:58.072780   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:58.109484   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:58.109499   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:58.114352   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:58.114360   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:58.151545   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:58.151557   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:58.169526   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:58.169536   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:58.199514   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:58.199526   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:58.212182   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:58.212193   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:58.238131   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:58.238144   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:58.253400   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:58.253411   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:00.767663   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:05.769810   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:05.770051   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:05.792829   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:05.792946   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:05.808706   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:05.808780   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:05.820335   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:05.820410   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:05.830720   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:05.830786   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:05.841043   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:05.841115   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:05.851187   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:05.851254   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:05.861372   36897 logs.go:276] 0 containers: []
	W0513 17:39:05.861383   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:05.861447   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:05.879832   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:05.879851   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:05.879856   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:05.895607   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:05.895620   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:05.930004   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:05.930018   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:05.942697   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:05.942709   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:05.954630   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:05.954640   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:05.972498   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:05.972512   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:05.989487   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:05.989499   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:06.002294   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:06.002304   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:06.028583   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:06.028598   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:06.042979   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:06.042992   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:06.054763   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:06.054778   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:06.070089   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:06.070102   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:06.085416   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:06.085426   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:06.119066   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:06.119077   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:06.123510   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:06.123516   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:08.636549   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:13.638726   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:13.638845   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:13.650451   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:13.650520   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:13.661006   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:13.661080   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:13.671620   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:13.671691   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:13.689120   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:13.689183   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:13.699462   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:13.699528   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:13.710051   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:13.710112   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:13.720345   36897 logs.go:276] 0 containers: []
	W0513 17:39:13.720360   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:13.720414   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:13.731024   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:13.731042   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:13.731048   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:13.742710   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:13.742720   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:13.754536   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:13.754547   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:13.766216   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:13.766227   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:13.801758   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:13.801771   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:13.806982   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:13.806990   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:13.824430   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:13.824443   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:13.859004   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:13.859015   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:13.870944   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:13.870959   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:13.883369   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:13.883383   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:13.897648   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:13.897661   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:13.922303   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:13.922318   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:13.949934   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:13.949943   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:13.961642   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:13.961656   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:13.973457   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:13.973469   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:16.493113   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:21.495254   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:21.495482   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:21.518913   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:21.519019   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:21.534667   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:21.534741   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:21.547087   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:21.547151   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:21.560371   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:21.560438   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:21.571469   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:21.571532   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:21.582040   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:21.582102   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:21.592266   36897 logs.go:276] 0 containers: []
	W0513 17:39:21.592276   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:21.592331   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:21.603263   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:21.603278   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:21.603286   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:21.620799   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:21.620811   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:21.632729   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:21.632742   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:21.637296   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:21.637304   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:21.651713   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:21.651722   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:21.663400   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:21.663412   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:21.674912   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:21.674923   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:21.689237   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:21.689251   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:21.700573   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:21.700586   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:21.711827   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:21.711840   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:21.723654   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:21.723668   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:21.735502   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:21.735517   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:21.759882   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:21.759891   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:21.794275   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:21.794283   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:21.830686   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:21.830697   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:24.347870   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:29.349993   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:29.350107   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:29.362695   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:29.362772   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:29.374596   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:29.374667   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:29.385807   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:29.385882   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:29.396108   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:29.396174   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:29.406166   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:29.406228   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:29.416795   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:29.416867   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:29.426997   36897 logs.go:276] 0 containers: []
	W0513 17:39:29.427007   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:29.427058   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:29.437401   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:29.437420   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:29.437426   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:29.452160   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:29.452170   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:29.463984   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:29.463995   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:29.477528   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:29.477539   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:29.494810   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:29.494820   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:29.506500   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:29.506510   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:29.518581   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:29.518591   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:29.523450   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:29.523456   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:29.557819   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:29.557833   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:29.577278   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:29.577288   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:29.592145   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:29.592159   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:29.604706   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:29.604716   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:29.638488   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:29.638497   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:29.650379   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:29.650392   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:29.662446   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:29.662459   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:32.187614   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:37.189692   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:37.189795   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:37.201909   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:37.201980   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:37.212830   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:37.212895   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:37.223842   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:37.223923   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:37.234491   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:37.234558   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:37.245283   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:37.245346   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:37.255607   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:37.255675   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:37.266363   36897 logs.go:276] 0 containers: []
	W0513 17:39:37.266379   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:37.266429   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:37.277085   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:37.277101   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:37.277106   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:37.288600   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:37.288612   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:37.323633   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:37.323645   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:37.357684   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:37.357697   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:37.369649   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:37.369661   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:37.381378   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:37.381390   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:37.396186   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:37.396198   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:37.410745   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:37.410758   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:37.422468   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:37.422482   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:37.434025   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:37.434037   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:37.451276   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:37.451288   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:37.462915   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:37.462925   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:37.467605   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:37.467611   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:37.481908   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:37.481919   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:37.505769   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:37.505780   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:40.019890   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:45.022276   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:45.022555   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:45.050384   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:45.050501   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:45.072791   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:45.072865   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:45.088641   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:45.088710   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:45.100187   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:45.100247   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:45.110456   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:45.110520   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:45.121211   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:45.121271   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:45.131705   36897 logs.go:276] 0 containers: []
	W0513 17:39:45.131717   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:45.131768   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:45.142174   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:45.142188   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:45.142194   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:45.178384   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:45.178398   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:45.192601   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:45.192612   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:45.204038   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:45.204050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:45.222681   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:45.222691   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:45.235649   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:45.235663   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:45.250139   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:45.250152   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:45.262761   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:45.262774   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:45.274559   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:45.274569   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:45.289577   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:45.289591   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:45.316155   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:45.316166   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:45.328451   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:45.328462   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:45.363220   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:45.363231   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:45.367695   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:45.367702   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:45.380075   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:45.380089   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:47.893899   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:52.896094   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:52.896354   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:52.917542   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:52.917667   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:52.933936   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:52.934009   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:52.946164   36897 logs.go:276] 4 containers: [36bcfdf0b842 0f4c32511b6a c4d76732fd6b c87aaf9c9388]
	I0513 17:39:52.946234   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:52.958464   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:52.958537   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:52.975031   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:52.975087   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:52.985463   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:52.985536   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:52.995316   36897 logs.go:276] 0 containers: []
	W0513 17:39:52.995327   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:52.995384   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:53.005828   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:53.005843   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:53.005849   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:53.019820   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:53.019833   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:53.042392   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:53.042400   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:53.053845   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:53.053858   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:53.073106   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:53.073120   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:53.085055   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:53.085066   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:53.102430   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:53.102442   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:53.114488   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:53.114498   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:53.130359   36897 logs.go:123] Gathering logs for coredns [0f4c32511b6a] ...
	I0513 17:39:53.130371   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4c32511b6a"
	I0513 17:39:53.148162   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:53.148173   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:53.163111   36897 logs.go:123] Gathering logs for coredns [36bcfdf0b842] ...
	I0513 17:39:53.163123   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36bcfdf0b842"
	I0513 17:39:53.175324   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:53.175338   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:53.187743   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:53.187752   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:53.222592   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:53.222601   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:53.227297   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:53.227303   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:55.764986   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:00.767102   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:00.771515   36897 out.go:177] 
	W0513 17:40:00.775520   36897 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0513 17:40:00.775527   36897 out.go:239] * 
	W0513 17:40:00.776071   36897 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:40:00.790370   36897 out.go:177] 

** /stderr **
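The stderr capture above is dominated by one repeating cycle: minikube probes the apiserver's /healthz endpoint (api_server.go:253), the request times out about five seconds later (api_server.go:269), and a full round of container-log gathering runs before the next probe, until the 6m0s node wait expires with the GUEST_START error. The Go sketch below illustrates only the shape of that polling loop; it is not minikube's actual api_server.go code, and the URL, per-request timeout, and overall deadline are simply read off the log lines above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes url until it returns 200 OK or the overall deadline
// passes. The per-request timeout mirrors the roughly five-second gap between
// the "Checking apiserver healthz" and "stopped:" lines in the log above.
func pollHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The guest apiserver presents a self-signed certificate, so this
			// illustration skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // brief backoff before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("wait for healthy API server:", err)
	}
}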
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-056000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
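The failing invocation above can be re-run by hand (this assumes the out/ build artifacts from this run are still present); exit status 80 corresponds to the GUEST_START error class shown in the stderr capture:

out/minikube-darwin-arm64 start -p running-upgrade-056000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2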
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-13 17:40:00.879113 -0700 PDT m=+1306.861048460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-056000 -n running-upgrade-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-056000 -n running-upgrade-056000: exit status 2 (15.733650333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-056000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-448000          | force-systemd-flag-448000 | jenkins | v1.33.1 | 13 May 24 17:29 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-090000              | force-systemd-env-090000  | jenkins | v1.33.1 | 13 May 24 17:29 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-090000           | force-systemd-env-090000  | jenkins | v1.33.1 | 13 May 24 17:29 PDT | 13 May 24 17:29 PDT |
	| start   | -p docker-flags-887000                | docker-flags-887000       | jenkins | v1.33.1 | 13 May 24 17:29 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-448000             | force-systemd-flag-448000 | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-448000          | force-systemd-flag-448000 | jenkins | v1.33.1 | 13 May 24 17:30 PDT | 13 May 24 17:30 PDT |
	| start   | -p cert-expiration-880000             | cert-expiration-880000    | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-887000 ssh               | docker-flags-887000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-887000 ssh               | docker-flags-887000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-887000                | docker-flags-887000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT | 13 May 24 17:30 PDT |
	| start   | -p cert-options-398000                | cert-options-398000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-398000 ssh               | cert-options-398000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-398000 -- sudo        | cert-options-398000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-398000                | cert-options-398000       | jenkins | v1.33.1 | 13 May 24 17:30 PDT | 13 May 24 17:30 PDT |
	| start   | -p running-upgrade-056000             | minikube                  | jenkins | v1.26.0 | 13 May 24 17:30 PDT | 13 May 24 17:31 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-056000             | running-upgrade-056000    | jenkins | v1.33.1 | 13 May 24 17:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-880000             | cert-expiration-880000    | jenkins | v1.33.1 | 13 May 24 17:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-880000             | cert-expiration-880000    | jenkins | v1.33.1 | 13 May 24 17:33 PDT | 13 May 24 17:33 PDT |
	| start   | -p kubernetes-upgrade-549000          | kubernetes-upgrade-549000 | jenkins | v1.33.1 | 13 May 24 17:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-549000          | kubernetes-upgrade-549000 | jenkins | v1.33.1 | 13 May 24 17:33 PDT | 13 May 24 17:33 PDT |
	| start   | -p kubernetes-upgrade-549000          | kubernetes-upgrade-549000 | jenkins | v1.33.1 | 13 May 24 17:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-549000          | kubernetes-upgrade-549000 | jenkins | v1.33.1 | 13 May 24 17:33 PDT | 13 May 24 17:33 PDT |
	| start   | -p stopped-upgrade-201000             | minikube                  | jenkins | v1.26.0 | 13 May 24 17:33 PDT | 13 May 24 17:34 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-201000 stop           | minikube                  | jenkins | v1.26.0 | 13 May 24 17:34 PDT | 13 May 24 17:34 PDT |
	| start   | -p stopped-upgrade-201000             | stopped-upgrade-201000    | jenkins | v1.33.1 | 13 May 24 17:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 17:34:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 17:34:21.140929   37047 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:34:21.141091   37047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:34:21.141095   37047 out.go:304] Setting ErrFile to fd 2...
	I0513 17:34:21.141098   37047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:34:21.141239   37047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:34:21.142342   37047 out.go:298] Setting JSON to false
	I0513 17:34:21.160736   37047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27231,"bootTime":1715619630,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:34:21.160799   37047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:34:21.166019   37047 out.go:177] * [stopped-upgrade-201000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:34:21.173847   37047 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:34:21.178034   37047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:34:21.173880   37047 notify.go:220] Checking for updates...
	I0513 17:34:21.184012   37047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:34:21.186987   37047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:34:21.190014   37047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:34:21.192951   37047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:34:21.196281   37047 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:34:21.200042   37047 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0513 17:34:21.201335   37047 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:34:21.206030   37047 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:34:21.212855   37047 start.go:297] selected driver: qemu2
	I0513 17:34:21.212863   37047 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:34:21.212931   37047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:34:21.215571   37047 cni.go:84] Creating CNI manager for ""
	I0513 17:34:21.215597   37047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:34:21.215630   37047 start.go:340] cluster config:
	{Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:34:21.215703   37047 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:34:21.221949   37047 out.go:177] * Starting "stopped-upgrade-201000" primary control-plane node in "stopped-upgrade-201000" cluster
	I0513 17:34:21.226015   37047 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0513 17:34:21.226031   37047 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0513 17:34:21.226039   37047 cache.go:56] Caching tarball of preloaded images
	I0513 17:34:21.226092   37047 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:34:21.226097   37047 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0513 17:34:21.226141   37047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/config.json ...
	I0513 17:34:21.226578   37047 start.go:360] acquireMachinesLock for stopped-upgrade-201000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:34:21.226614   37047 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "stopped-upgrade-201000"
	I0513 17:34:21.226625   37047 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:34:21.226629   37047 fix.go:54] fixHost starting: 
	I0513 17:34:21.226738   37047 fix.go:112] recreateIfNeeded on stopped-upgrade-201000: state=Stopped err=<nil>
	W0513 17:34:21.226746   37047 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:34:21.231942   37047 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-201000" ...
	I0513 17:34:17.312186   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:17.312278   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:17.326345   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:17.326423   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:17.337113   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:17.337183   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:17.347928   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:17.347991   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:17.358014   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:17.358085   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:17.369127   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:17.369203   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:17.381308   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:17.381403   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:17.394159   36897 logs.go:276] 0 containers: []
	W0513 17:34:17.394172   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:17.394254   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:17.405655   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:17.405673   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:17.405678   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:17.424373   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:17.424386   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:17.440136   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:17.440157   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:17.454102   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:17.454116   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:17.467059   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:17.467072   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:17.479657   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:17.479677   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:17.506197   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:17.506217   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:17.549431   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:17.549447   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:17.565862   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:17.565875   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:17.579146   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:17.579160   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:17.591064   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:17.591077   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:17.627670   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:17.627683   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:17.645636   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:17.645650   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:17.657488   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:17.657506   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:17.670351   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:17.670364   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:17.675335   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:17.675352   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:17.691239   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:17.691253   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:20.207336   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:21.237684   37047 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/qemu.pid -nic user,model=virtio,hostfwd=tcp::56273-:22,hostfwd=tcp::56274-:2376,hostname=stopped-upgrade-201000 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/disk.qcow2
	I0513 17:34:21.281864   37047 main.go:141] libmachine: STDOUT: 
	I0513 17:34:21.281896   37047 main.go:141] libmachine: STDERR: 
	I0513 17:34:21.281901   37047 main.go:141] libmachine: Waiting for VM to start (ssh -p 56273 docker@127.0.0.1)...
	I0513 17:34:25.209656   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:25.209875   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:25.228036   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:25.228131   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:25.242192   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:25.242262   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:25.258435   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:25.258504   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:25.268799   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:25.268858   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:25.279719   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:25.279785   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:25.295186   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:25.295249   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:25.304772   36897 logs.go:276] 0 containers: []
	W0513 17:34:25.304782   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:25.304831   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:25.315015   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:25.315033   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:25.315038   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:25.352877   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:25.352884   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:25.387080   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:25.387093   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:25.400706   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:25.400719   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:25.413333   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:25.413346   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:25.431036   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:25.431049   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:25.442830   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:25.442843   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:25.454148   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:25.454159   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:25.468971   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:25.468983   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:25.492011   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:25.492020   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:25.502839   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:25.502850   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:25.518345   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:25.518357   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:25.522690   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:25.522698   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:25.536509   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:25.536518   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:25.550880   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:25.550889   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:25.562800   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:25.562811   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:25.574159   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:25.574170   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:28.095373   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:33.097565   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:33.097680   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:33.110936   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:33.111014   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:33.123906   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:33.123981   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:33.137083   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:33.137154   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:33.149938   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:33.150013   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:33.162432   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:33.162504   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:33.177342   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:33.177409   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:33.194252   36897 logs.go:276] 0 containers: []
	W0513 17:34:33.194264   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:33.194329   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:33.206508   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:33.206527   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:33.206533   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:33.222559   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:33.222572   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:33.236077   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:33.236090   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:33.250396   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:33.250410   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:33.269928   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:33.269940   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:33.285653   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:33.285666   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:33.323364   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:33.323377   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:33.328255   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:33.328265   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:33.341061   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:33.341074   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:33.355226   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:33.355236   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:33.367409   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:33.367421   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:33.384343   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:33.384357   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:33.428080   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:33.428097   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:33.444784   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:33.444796   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:33.456392   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:33.456407   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:33.469593   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:33.469606   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:33.496166   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:33.496184   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:36.013631   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:41.015850   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:41.016361   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:41.054490   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:41.054627   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:41.075230   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:41.075335   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:41.090717   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:41.090799   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:41.106418   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:41.106500   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:41.117660   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:41.117727   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:41.128427   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:41.128498   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:41.138339   36897 logs.go:276] 0 containers: []
	W0513 17:34:41.138350   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:41.138406   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:41.149188   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:41.149206   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:41.149212   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:41.187982   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:41.187992   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:41.199221   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:41.199236   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:41.213199   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:41.213210   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:41.228004   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:41.228014   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:41.250537   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:41.250548   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:41.286047   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:41.286061   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:41.305283   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:41.305295   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:41.317273   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:41.317284   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:41.335128   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:41.335140   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:41.347084   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:41.347097   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:41.351589   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:41.351599   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:41.365304   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:41.365315   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:41.377433   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:41.377443   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:41.389400   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:41.389410   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:41.405282   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:41.405293   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:41.417068   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:41.417079   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:41.922922   37047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/config.json ...
	I0513 17:34:41.923746   37047 machine.go:94] provisionDockerMachine start ...
	I0513 17:34:41.923949   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:41.924525   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:41.924542   37047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 17:34:42.011819   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 17:34:42.011853   37047 buildroot.go:166] provisioning hostname "stopped-upgrade-201000"
	I0513 17:34:42.011967   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.012215   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.012225   37047 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-201000 && echo "stopped-upgrade-201000" | sudo tee /etc/hostname
	I0513 17:34:42.088669   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-201000
	
	I0513 17:34:42.088734   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.088878   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.088889   37047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-201000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-201000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-201000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 17:34:42.156787   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 17:34:42.156800   37047 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18872-34554/.minikube CaCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18872-34554/.minikube}
	I0513 17:34:42.156808   37047 buildroot.go:174] setting up certificates
	I0513 17:34:42.156818   37047 provision.go:84] configureAuth start
	I0513 17:34:42.156822   37047 provision.go:143] copyHostCerts
	I0513 17:34:42.156906   37047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem, removing ...
	I0513 17:34:42.156912   37047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem
	I0513 17:34:42.157037   37047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem (1082 bytes)
	I0513 17:34:42.157222   37047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem, removing ...
	I0513 17:34:42.157227   37047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem
	I0513 17:34:42.157273   37047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem (1123 bytes)
	I0513 17:34:42.157393   37047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem, removing ...
	I0513 17:34:42.157396   37047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem
	I0513 17:34:42.157439   37047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem (1675 bytes)
	I0513 17:34:42.157533   37047 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-201000 san=[127.0.0.1 localhost minikube stopped-upgrade-201000]
	I0513 17:34:42.320293   37047 provision.go:177] copyRemoteCerts
	I0513 17:34:42.320338   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 17:34:42.320348   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:34:42.356770   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0513 17:34:42.363712   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0513 17:34:42.370399   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 17:34:42.377184   37047 provision.go:87] duration metric: took 220.365625ms to configureAuth
	I0513 17:34:42.377194   37047 buildroot.go:189] setting minikube options for container-runtime
	I0513 17:34:42.377314   37047 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:34:42.377346   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.377433   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.377439   37047 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 17:34:42.441704   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 17:34:42.441712   37047 buildroot.go:70] root file system type: tmpfs
	I0513 17:34:42.441767   37047 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 17:34:42.441815   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.441913   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.441946   37047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 17:34:42.509707   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 17:34:42.509752   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.509859   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.509869   37047 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 17:34:42.863161   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0513 17:34:42.863175   37047 machine.go:97] duration metric: took 939.435917ms to provisionDockerMachine
	I0513 17:34:42.863182   37047 start.go:293] postStartSetup for "stopped-upgrade-201000" (driver="qemu2")
	I0513 17:34:42.863189   37047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 17:34:42.863249   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 17:34:42.863258   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:34:42.901682   37047 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 17:34:42.903301   37047 info.go:137] Remote host: Buildroot 2021.02.12
	I0513 17:34:42.903317   37047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18872-34554/.minikube/addons for local assets ...
	I0513 17:34:42.903404   37047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18872-34554/.minikube/files for local assets ...
	I0513 17:34:42.903522   37047 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem -> 350552.pem in /etc/ssl/certs
	I0513 17:34:42.903645   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 17:34:42.906197   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem --> /etc/ssl/certs/350552.pem (1708 bytes)
	I0513 17:34:42.913253   37047 start.go:296] duration metric: took 50.066667ms for postStartSetup
	I0513 17:34:42.913266   37047 fix.go:56] duration metric: took 21.687070542s for fixHost
	I0513 17:34:42.913297   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.913398   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.913405   37047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0513 17:34:42.978591   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715646882.637791838
	
	I0513 17:34:42.978600   37047 fix.go:216] guest clock: 1715646882.637791838
	I0513 17:34:42.978605   37047 fix.go:229] Guest: 2024-05-13 17:34:42.637791838 -0700 PDT Remote: 2024-05-13 17:34:42.913268 -0700 PDT m=+21.798858084 (delta=-275.476162ms)
	I0513 17:34:42.978616   37047 fix.go:200] guest clock delta is within tolerance: -275.476162ms
	I0513 17:34:42.978619   37047 start.go:83] releasing machines lock for "stopped-upgrade-201000", held for 21.752434666s
	I0513 17:34:42.978693   37047 ssh_runner.go:195] Run: cat /version.json
	I0513 17:34:42.978698   37047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 17:34:42.978702   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:34:42.978716   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	W0513 17:34:42.979335   37047 sshutil.go:64] dial failure (will retry): dial tcp [::1]:56273: connect: connection refused
	I0513 17:34:42.979357   37047 retry.go:31] will retry after 203.248018ms: dial tcp [::1]:56273: connect: connection refused
	W0513 17:34:43.225598   37047 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0513 17:34:43.225683   37047 ssh_runner.go:195] Run: systemctl --version
	I0513 17:34:43.228490   37047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 17:34:43.231012   37047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 17:34:43.231047   37047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0513 17:34:43.235091   37047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0513 17:34:43.241247   37047 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 17:34:43.241260   37047 start.go:494] detecting cgroup driver to use...
	I0513 17:34:43.241352   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 17:34:43.249803   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0513 17:34:43.253651   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 17:34:43.257179   37047 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 17:34:43.257207   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 17:34:43.260653   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 17:34:43.263780   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 17:34:43.266638   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 17:34:43.269702   37047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 17:34:43.272977   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 17:34:43.276296   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 17:34:43.279254   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 17:34:43.281988   37047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 17:34:43.285006   37047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 17:34:43.287790   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:43.357489   37047 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 17:34:43.367919   37047 start.go:494] detecting cgroup driver to use...
	I0513 17:34:43.367996   37047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 17:34:43.373953   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 17:34:43.379035   37047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 17:34:43.388316   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 17:34:43.392516   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 17:34:43.397048   37047 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 17:34:43.465167   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 17:34:43.470970   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 17:34:43.476988   37047 ssh_runner.go:195] Run: which cri-dockerd
	I0513 17:34:43.478522   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 17:34:43.481506   37047 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 17:34:43.486782   37047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 17:34:43.574564   37047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 17:34:43.648321   37047 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 17:34:43.648383   37047 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0513 17:34:43.653419   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:43.736880   37047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 17:34:44.900359   37047 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163487916s)
	I0513 17:34:44.900415   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 17:34:44.905015   37047 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0513 17:34:44.910980   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 17:34:44.915748   37047 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 17:34:44.993564   37047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 17:34:45.065120   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:45.150063   37047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 17:34:45.156209   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 17:34:45.160863   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:45.231219   37047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 17:34:45.269819   37047 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 17:34:45.269889   37047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 17:34:45.272400   37047 start.go:562] Will wait 60s for crictl version
	I0513 17:34:45.272453   37047 ssh_runner.go:195] Run: which crictl
	I0513 17:34:45.273786   37047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 17:34:45.289293   37047 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0513 17:34:45.289357   37047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 17:34:45.306024   37047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 17:34:45.332581   37047 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0513 17:34:45.332713   37047 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0513 17:34:45.333916   37047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 17:34:45.337662   37047 kubeadm.go:877] updating cluster {Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0513 17:34:45.337705   37047 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0513 17:34:45.337745   37047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 17:34:45.348431   37047 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 17:34:45.348439   37047 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0513 17:34:45.348485   37047 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 17:34:45.352203   37047 ssh_runner.go:195] Run: which lz4
	I0513 17:34:45.353454   37047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0513 17:34:45.354612   37047 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 17:34:45.354621   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0513 17:34:46.016575   37047 docker.go:649] duration metric: took 663.164292ms to copy over tarball
	I0513 17:34:46.016648   37047 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 17:34:43.931236   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:47.174827   37047 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.158184667s)
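Note: the preload path copies a ~360 MB lz4-compressed tarball of container images plus docker metadata into the guest (the scp at 17:34:45.354621) and unpacks it straight into /var, which is why the `which lz4` probe ran first. The extract step in isolation, assuming GNU tar and an lz4 binary on the guest PATH:

    # Unpack a preloaded image tarball into /var, preserving file capabilities.
    command -v lz4 >/dev/null || { echo "lz4 missing" >&2; exit 1; }
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4   # reclaim the space once extracted
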
	I0513 17:34:47.174840   37047 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0513 17:34:47.190205   37047 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 17:34:47.193640   37047 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0513 17:34:47.198812   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:47.259910   37047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 17:34:48.839936   37047 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.580039791s)
	I0513 17:34:48.840040   37047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 17:34:48.853176   37047 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 17:34:48.853187   37047 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0513 17:34:48.853193   37047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
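Note: the "wasn't preloaded" verdicts are a naming mismatch, not missing bytes: the preload tarball carries the images under the legacy k8s.gcr.io names, while this minikube build expects them under the renamed registry.k8s.io registry, so LoadCachedImages removes each one and transfers it again from the host cache. Retagging would have made the preloaded copies match; a sketch of that alternative (not what minikube does here; the image list mirrors the stdout block above):

    # Retag legacy k8s.gcr.io images under registry.k8s.io (same layers).
    for img in kube-apiserver:v1.24.1 kube-proxy:v1.24.1 \
               kube-controller-manager:v1.24.1 kube-scheduler:v1.24.1 \
               etcd:3.5.3-0 pause:3.7 coredns/coredns:v1.8.6; do
        docker tag "k8s.gcr.io/$img" "registry.k8s.io/$img"
    done
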
	I0513 17:34:48.860016   37047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:48.860043   37047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:48.860020   37047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:48.860120   37047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:48.860128   37047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:48.860153   37047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:48.860204   37047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:48.860252   37047 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0513 17:34:48.868163   37047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:48.868270   37047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:48.868358   37047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:48.868610   37047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:48.869268   37047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:48.869324   37047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:48.869353   37047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:48.869322   37047 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0513 17:34:49.294912   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:49.306007   37047 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0513 17:34:49.306046   37047 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:49.306104   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:49.309996   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:49.316870   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0513 17:34:49.317695   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0513 17:34:49.328249   37047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0513 17:34:49.328267   37047 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:49.328318   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:49.328332   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0513 17:34:49.328348   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0513 17:34:49.338589   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:49.364183   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:49.365113   37047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0513 17:34:49.365134   37047 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:49.365098   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0513 17:34:49.365171   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:49.381353   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:49.393119   37047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0513 17:34:49.393147   37047 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:49.393209   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:49.413487   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0513 17:34:49.431775   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0513 17:34:49.434840   37047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0513 17:34:49.434898   37047 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:49.434949   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:49.440386   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0513 17:34:49.444082   37047 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0513 17:34:49.444186   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:49.479697   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0513 17:34:49.493243   37047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0513 17:34:49.493264   37047 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:49.493264   37047 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0513 17:34:49.493325   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:49.493326   37047 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0513 17:34:49.493350   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0513 17:34:49.523748   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0513 17:34:49.523868   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0513 17:34:49.548318   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0513 17:34:49.548350   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0513 17:34:49.559211   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0513 17:34:49.559333   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0513 17:34:49.581635   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0513 17:34:49.581666   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0513 17:34:49.621643   37047 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0513 17:34:49.621656   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0513 17:34:49.650469   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0513 17:34:49.650495   37047 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0513 17:34:49.650500   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0513 17:34:49.658418   37047 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0513 17:34:49.658518   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:49.698681   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0513 17:34:49.698705   37047 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0513 17:34:49.698711   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0513 17:34:49.698703   37047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0513 17:34:49.698744   37047 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:49.698800   37047 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:49.853184   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0513 17:34:49.853214   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0513 17:34:49.853327   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0513 17:34:49.854839   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0513 17:34:49.854856   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0513 17:34:49.881623   37047 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0513 17:34:49.881636   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0513 17:34:50.124586   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0513 17:34:50.124628   37047 cache_images.go:92] duration metric: took 1.271452958s to LoadCachedImages
	W0513 17:34:50.124674   37047 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
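Note: each cache miss above follows the same loop: stat the image tarball in the guest, scp it over from the host cache when the stat fails, then stream it into the docker daemon. The closing warning shows the one genuine gap: the kube-controller-manager tarball is absent from the host cache, so that image could not be restored at all. The guest-side half of the loop, condensed (path is the etcd example from the log; sudo sits on cat, presumably because the tarball under /var/lib/minikube is root-owned while the docker socket is not):

    # Check-and-load half of minikube's cached-image loop.
    IMG_TAR=/var/lib/minikube/images/etcd_3.5.3-0   # example from the log
    stat -c "%s %y" "$IMG_TAR" >/dev/null 2>&1 \
        || echo "missing: would scp it from the host cache first" >&2
    sudo cat "$IMG_TAR" | docker load
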
	I0513 17:34:50.124680   37047 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0513 17:34:50.124731   37047 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-201000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
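Note: the unit text above is not a full service file but a drop-in (written below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 380 bytes) layered over the stock kubelet.service; the empty ExecStart= line is the standard systemd idiom for clearing the inherited command before substituting minikube's kubelet invocation. To inspect the merged result on the guest:

    # Show kubelet.service together with every drop-in that overrides it.
    systemctl cat kubelet
    # After editing a drop-in, reload unit files and restart:
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
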
	I0513 17:34:50.124802   37047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 17:34:50.138900   37047 cni.go:84] Creating CNI manager for ""
	I0513 17:34:50.138912   37047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:34:50.138917   37047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 17:34:50.138926   37047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-201000 NodeName:stopped-upgrade-201000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 17:34:50.138995   37047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-201000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
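Note: the generated kubeadm.yaml is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. Two details worth noticing: cgroupDriver: cgroupfs mirrors the `docker info --format {{.CgroupDriver}}` probe above, since kubelet and the container runtime must agree on a driver, and the eviction settings (imageGCHighThresholdPercent: 100, all evictionHard thresholds at "0%") deliberately switch off disk-pressure eviction for the small CI disk. Pulling a single document out of the bundle for inspection needs no extra tooling:

    # Print only the third document (KubeletConfiguration) from the bundle.
    awk '/^---$/{n++; next} n==2' /var/tmp/minikube/kubeadm.yaml
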
	I0513 17:34:50.139047   37047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0513 17:34:50.142045   37047 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 17:34:50.142074   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 17:34:50.145102   37047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0513 17:34:50.150154   37047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 17:34:50.155109   37047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0513 17:34:50.160236   37047 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0513 17:34:50.161358   37047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 17:34:50.165258   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:50.248584   37047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:34:50.255068   37047 certs.go:68] Setting up /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000 for IP: 10.0.2.15
	I0513 17:34:50.255079   37047 certs.go:194] generating shared ca certs ...
	I0513 17:34:50.255088   37047 certs.go:226] acquiring lock for ca certs: {Name:mk4bcf4fefcc4c80b8079c869e5ba8b057091109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.255244   37047 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.key
	I0513 17:34:50.255297   37047 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.key
	I0513 17:34:50.255302   37047 certs.go:256] generating profile certs ...
	I0513 17:34:50.255384   37047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key
	I0513 17:34:50.255404   37047 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6
	I0513 17:34:50.255415   37047 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0513 17:34:50.371358   37047 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6 ...
	I0513 17:34:50.371370   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6: {Name:mk9cf29c2ea8736ae5d3a43c029c95bade14f03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.371666   37047 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6 ...
	I0513 17:34:50.371672   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6: {Name:mkc10f4b7a2f9c8ff2776d724bc4cc0eb180933d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.371795   37047 certs.go:381] copying /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt
	I0513 17:34:50.371938   37047 certs.go:385] copying /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key
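Note: the apiserver certificate generated here embeds the four IPs listed at 17:34:50.255415 as subject alternative names: the in-cluster service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. The .968a75f6 suffix on the intermediate filenames appears to key the cert to that particular SAN set, so a changed IP list would produce a new file rather than reuse a stale one. Confirming the SANs once the cert lands on the guest (it is copied to /var/lib/minikube/certs below) is plain openssl:

    # List the subject alternative names baked into the apiserver cert.
    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
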
	I0513 17:34:50.372082   37047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/proxy-client.key
	I0513 17:34:50.372215   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055.pem (1338 bytes)
	W0513 17:34:50.372242   37047 certs.go:480] ignoring /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055_empty.pem, impossibly tiny 0 bytes
	I0513 17:34:50.372247   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem (1675 bytes)
	I0513 17:34:50.372266   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem (1082 bytes)
	I0513 17:34:50.372289   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem (1123 bytes)
	I0513 17:34:50.372306   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem (1675 bytes)
	I0513 17:34:50.372345   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem (1708 bytes)
	I0513 17:34:50.372657   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 17:34:50.379734   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0513 17:34:50.387051   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 17:34:50.393553   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0513 17:34:50.401293   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0513 17:34:50.408741   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 17:34:50.416135   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 17:34:50.423981   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 17:34:50.431590   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem --> /usr/share/ca-certificates/350552.pem (1708 bytes)
	I0513 17:34:50.438289   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 17:34:50.445294   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055.pem --> /usr/share/ca-certificates/35055.pem (1338 bytes)
	I0513 17:34:50.452419   37047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 17:34:50.457564   37047 ssh_runner.go:195] Run: openssl version
	I0513 17:34:50.459562   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 17:34:50.462414   37047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:34:50.463843   37047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 14 00:31 /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:34:50.463861   37047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:34:50.465497   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 17:34:50.468865   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35055.pem && ln -fs /usr/share/ca-certificates/35055.pem /etc/ssl/certs/35055.pem"
	I0513 17:34:50.472202   37047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35055.pem
	I0513 17:34:50.473500   37047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 14 00:19 /usr/share/ca-certificates/35055.pem
	I0513 17:34:50.473517   37047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35055.pem
	I0513 17:34:50.475324   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/35055.pem /etc/ssl/certs/51391683.0"
	I0513 17:34:50.478074   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/350552.pem && ln -fs /usr/share/ca-certificates/350552.pem /etc/ssl/certs/350552.pem"
	I0513 17:34:50.481471   37047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/350552.pem
	I0513 17:34:50.482924   37047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 14 00:19 /usr/share/ca-certificates/350552.pem
	I0513 17:34:50.482956   37047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/350552.pem
	I0513 17:34:50.484665   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/350552.pem /etc/ssl/certs/3ec20f2e.0"
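Note: the pattern in this block repeats once per certificate: link it into /usr/share/ca-certificates, compute its OpenSSL subject hash, then create the /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 here) that OpenSSL's lookup machinery resolves at verify time; it is the manual equivalent of c_rehash. The same step by hand:

    # Hash a CA cert and create the lookup symlink OpenSSL expects.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
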
	I0513 17:34:50.487718   37047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 17:34:50.489149   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 17:34:50.491428   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 17:34:50.493346   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 17:34:50.495678   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 17:34:50.497492   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 17:34:50.499427   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
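Note: the six -checkend 86400 probes ask whether each control-plane certificate will still be valid 86400 seconds (one day) from now; openssl exits 0 when the cert outlives the window and 1 when it would expire, which is what decides whether regeneration is needed. In isolation:

    # Exit 0 if the cert is still valid 24h from now, non-zero otherwise.
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt
    then echo "valid for at least another day"
    else echo "expires within a day: regenerate"
    fi
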
	I0513 17:34:50.501358   37047 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:34:50.501433   37047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 17:34:50.511790   37047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0513 17:34:50.514612   37047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0513 17:34:50.514621   37047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0513 17:34:50.514624   37047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0513 17:34:50.514647   37047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0513 17:34:50.517586   37047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:34:50.517891   37047 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-201000" does not appear in /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:34:50.517993   37047 kubeconfig.go:62] /Users/jenkins/minikube-integration/18872-34554/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-201000" cluster setting kubeconfig missing "stopped-upgrade-201000" context setting]
	I0513 17:34:50.518199   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.518640   37047 kapi.go:59] client config for stopped-upgrade-201000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ca1e10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 17:34:50.518968   37047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0513 17:34:50.521636   37047 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-201000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0513 17:34:50.521640   37047 kubeadm.go:1154] stopping kube-system containers ...
	I0513 17:34:50.521680   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 17:34:50.532361   37047 docker.go:483] Stopping containers: [47dfe97c593d 2f96dad126c2 c06366361f20 b3d353a21008 efba4f55cfe3 ae8a30a7a109 95d64d777ab1 addde02f95eb]
	I0513 17:34:50.532428   37047 ssh_runner.go:195] Run: docker stop 47dfe97c593d 2f96dad126c2 c06366361f20 b3d353a21008 efba4f55cfe3 ae8a30a7a109 95d64d777ab1 addde02f95eb
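Note: the name filter works because kubelet's docker integration names every container k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>, so the pattern k8s_.*_(kube-system)_ matches exactly the kube-system pods. Listing them with names attached makes the eight IDs above legible:

    # Show kube-system containers with their kubelet-assigned names.
    docker ps -a --filter=name='k8s_.*_(kube-system)_' \
        --format '{{.ID}}  {{.Names}}'
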
	I0513 17:34:50.543011   37047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0513 17:34:50.548472   37047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:34:50.551409   37047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 17:34:50.551424   37047 kubeadm.go:156] found existing configuration files:
	
	I0513 17:34:50.551445   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf
	I0513 17:34:50.553954   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 17:34:50.553981   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:34:50.556997   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf
	I0513 17:34:50.559975   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 17:34:50.560015   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:34:50.562563   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf
	I0513 17:34:50.565300   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 17:34:50.565324   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:34:50.568298   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf
	I0513 17:34:50.570901   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 17:34:50.570928   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 17:34:50.573549   37047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:34:50.576664   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:50.599223   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:51.081674   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:48.931667   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:48.931743   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:48.945986   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:48.946041   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:48.963491   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:48.963541   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:48.974902   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:48.974969   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:48.987808   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:48.987863   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:48.999863   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:48.999908   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:49.010959   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:49.011013   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:49.022474   36897 logs.go:276] 0 containers: []
	W0513 17:34:49.022485   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:49.022520   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:49.034595   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:49.034619   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:49.034625   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:49.054170   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:49.054183   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:49.066830   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:49.066841   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:49.104742   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:49.104755   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:49.120239   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:49.120250   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:49.134023   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:49.134036   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:49.149198   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:49.149209   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:49.154100   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:49.154114   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:49.170410   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:49.170424   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:49.186556   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:49.186571   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:49.206064   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:49.206078   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:49.231625   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:49.231639   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:49.244807   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:49.244821   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:49.286275   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:49.286288   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:49.298049   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:49.298060   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:49.320813   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:49.320824   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:49.334588   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:49.334602   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
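Note: everything process 36897 does in this stretch is diagnostics, not progress: each healthz timeout triggers one sweep that enumerates the control-plane containers (running or exited) and tails 400 lines from each, alongside the kubelet and docker journals and dmesg. The container-status line uses a fallback idiom worth isolating, preferring crictl when installed and degrading to docker:

    # Use crictl if present; otherwise fall back to plain docker ps.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
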
	I0513 17:34:51.219257   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:51.244307   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
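Note: the restart path (process 37047) never runs a monolithic kubeadm init; it replays the individual phases against the regenerated config: certs, kubeconfigs, kubelet bootstrap, static control-plane manifests, then local etcd. Condensed into one loop (binary and config paths from the log; $phase is left unquoted on purpose so it word-splits into subcommands):

    KUBEADM=/var/lib/minikube/binaries/v1.24.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" \
                 "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            "$KUBEADM" init phase $phase --config "$CFG"
    done
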
	I0513 17:34:51.271313   37047 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:34:51.271390   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:34:51.772008   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:34:52.273427   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:34:52.277960   37047 api_server.go:72] duration metric: took 1.006669333s to wait for apiserver process to appear ...
	I0513 17:34:52.277969   37047 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:34:52.277983   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
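Note: from here on both processes sit in the same wait loop: poll https://10.0.2.15:8443/healthz, give each attempt roughly five seconds (17:34:52.27 to the timeout at 17:34:57.28 below), log it as stopped, and retry. Neither apiserver ever answers, which is consistent with the failures recorded for these upgrade tests. The equivalent manual probe, with -k because the apiserver cert is not in the host trust store:

    # Poll the apiserver health endpoint; a healthy server returns "ok".
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
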
	I0513 17:34:51.850022   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:57.280007   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:57.280045   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:56.852122   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:56.852282   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:34:56.863945   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:34:56.864018   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:34:56.874373   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:34:56.874441   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:34:56.885401   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:34:56.885467   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:34:56.895919   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:34:56.895984   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:34:56.906529   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:34:56.906603   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:34:56.916917   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:34:56.916987   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:34:56.927015   36897 logs.go:276] 0 containers: []
	W0513 17:34:56.927028   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:34:56.927091   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:34:56.937822   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:34:56.937844   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:34:56.937850   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:34:56.948965   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:34:56.948982   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:34:56.963130   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:34:56.963139   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:34:56.976263   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:34:56.976276   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:34:56.987460   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:34:56.987471   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:34:56.999037   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:34:56.999047   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:34:57.015029   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:34:57.015043   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:34:57.026317   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:34:57.026332   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:34:57.037707   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:34:57.037720   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:34:57.042021   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:34:57.042027   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:34:57.056906   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:34:57.056917   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:34:57.075080   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:34:57.075092   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:34:57.088741   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:34:57.088750   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:34:57.104620   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:34:57.104631   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:34:57.129876   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:34:57.129887   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:34:57.153178   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:34:57.153187   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:34:57.193155   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:34:57.193164   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:34:59.731952   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:02.280189   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:02.280229   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:04.732371   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:04.732545   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:04.746739   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:04.746820   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:04.758210   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:04.758275   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:04.769014   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:04.769080   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:04.779390   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:04.779453   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:04.793172   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:04.793244   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:04.808339   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:04.808403   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:04.818676   36897 logs.go:276] 0 containers: []
	W0513 17:35:04.818687   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:04.818734   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:04.830312   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:04.830333   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:04.830339   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:04.842509   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:04.842520   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:04.853745   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:04.853756   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:04.892142   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:04.892149   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:04.905976   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:04.905986   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:04.918541   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:04.918552   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:04.929622   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:04.929634   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:04.947698   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:04.947708   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:04.960844   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:04.960855   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:04.972548   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:04.972559   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:05.008679   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:05.008689   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:05.023416   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:05.023425   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:05.039224   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:05.039235   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:05.050974   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:05.050988   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:05.055324   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:05.055330   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:05.068553   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:05.068563   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:05.092930   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:05.092939   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:07.280535   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:07.280584   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:07.608886   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:12.281055   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:12.281089   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:12.611043   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:12.611124   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:12.622553   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:12.622625   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:12.642554   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:12.642619   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:12.653490   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:12.653554   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:12.663913   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:12.663983   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:12.675190   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:12.675257   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:12.685722   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:12.685785   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:12.706632   36897 logs.go:276] 0 containers: []
	W0513 17:35:12.706645   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:12.706703   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:12.717312   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:12.717329   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:12.717335   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:12.731551   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:12.731562   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:12.743347   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:12.743357   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:12.759216   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:12.759229   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:12.771054   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:12.771064   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:12.782451   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:12.782467   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:12.794186   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:12.794196   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:12.807200   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:12.807214   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:12.819156   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:12.819166   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:12.833459   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:12.833472   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:12.847657   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:12.847666   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:12.859356   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:12.859367   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:12.863619   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:12.863625   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:12.897420   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:12.897429   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:12.908381   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:12.908390   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:12.931036   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:12.931045   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:12.968730   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:12.968740   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:15.487162   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:17.281673   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:17.281733   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:20.489436   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:20.489799   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:20.524037   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:20.524173   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:20.549254   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:20.549339   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:20.562972   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:20.563036   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:20.576093   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:20.576175   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:20.586783   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:20.586856   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:20.597418   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:20.597489   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:20.608164   36897 logs.go:276] 0 containers: []
	W0513 17:35:20.608176   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:20.608238   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:20.618873   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:20.618892   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:20.618897   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:20.623292   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:20.623302   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:20.634531   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:20.634541   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:20.650743   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:20.650755   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:20.673340   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:20.673351   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:20.687175   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:20.687187   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:20.698952   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:20.698966   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:20.715925   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:20.715936   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:20.728506   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:20.728517   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:20.769167   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:20.769177   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:20.804242   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:20.804256   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:20.824290   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:20.824301   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:20.835943   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:20.835953   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:20.849963   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:20.849976   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:20.863342   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:20.863356   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:20.877692   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:20.877702   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:20.889481   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:20.889492   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:22.282771   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:22.282836   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:23.402797   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:27.284074   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:27.284099   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:28.405168   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:28.405548   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:28.437786   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:28.437925   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:28.455835   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:28.455928   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:28.469078   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:28.469160   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:28.481269   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:28.481338   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:28.499121   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:28.499196   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:28.509759   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:28.509829   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:28.520397   36897 logs.go:276] 0 containers: []
	W0513 17:35:28.520406   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:28.520460   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:28.531049   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:28.531066   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:28.531072   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:28.542856   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:28.542868   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:28.554581   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:28.554592   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:28.569509   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:28.569520   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:28.582225   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:28.582236   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:28.594163   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:28.594174   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:28.612474   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:28.612484   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:28.625070   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:28.625081   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:28.629590   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:28.629597   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:28.644465   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:28.644477   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:28.659169   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:28.659178   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:28.675058   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:28.675068   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:28.714308   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:28.714317   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:28.727154   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:28.727165   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:28.738658   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:28.738673   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:28.762456   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:28.762466   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:28.797990   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:28.798004   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:31.317405   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:32.285436   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:32.285487   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:36.319622   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:36.319811   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:36.334686   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:36.334762   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:36.347536   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:36.347603   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:36.358623   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:36.358695   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:36.370414   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:36.370478   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:36.380857   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:36.380925   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:36.391582   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:36.391640   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:36.402194   36897 logs.go:276] 0 containers: []
	W0513 17:35:36.402205   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:36.402259   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:36.412575   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:36.412594   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:36.412600   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:36.424139   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:36.424152   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:36.435464   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:36.435478   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:36.475329   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:36.475337   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:36.479337   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:36.479343   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:37.287397   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:37.287416   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:36.516564   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:36.516578   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:36.530394   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:36.530404   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:36.542100   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:36.542113   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:36.559686   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:36.559695   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:36.572212   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:36.572225   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:36.594357   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:36.594363   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:36.608412   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:36.608422   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:36.619842   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:36.619853   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:36.637824   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:36.637841   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:36.661311   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:36.661322   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:36.689117   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:36.689133   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:36.701943   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:36.701955   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:39.216185   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:42.289546   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:42.289595   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:44.218368   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:44.218491   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:44.232757   36897 logs.go:276] 2 containers: [63a5d970fd15 8176cd4f3d53]
	I0513 17:35:44.232836   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:44.243555   36897 logs.go:276] 2 containers: [6d0a1f5f9486 b8a0d562dc85]
	I0513 17:35:44.243637   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:44.254448   36897 logs.go:276] 1 containers: [705d5605d025]
	I0513 17:35:44.254519   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:44.265094   36897 logs.go:276] 2 containers: [ec34211fbe04 eaba027fa937]
	I0513 17:35:44.265167   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:44.277165   36897 logs.go:276] 1 containers: [be219f684afb]
	I0513 17:35:44.277230   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:44.288583   36897 logs.go:276] 2 containers: [d702972a5e7d 7fc083126b07]
	I0513 17:35:44.288653   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:44.298499   36897 logs.go:276] 0 containers: []
	W0513 17:35:44.298514   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:44.298580   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:44.309139   36897 logs.go:276] 2 containers: [cc04d007abfe 6a1719c9dec2]
	I0513 17:35:44.309157   36897 logs.go:123] Gathering logs for etcd [b8a0d562dc85] ...
	I0513 17:35:44.309162   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a0d562dc85"
	I0513 17:35:44.324186   36897 logs.go:123] Gathering logs for kube-scheduler [eaba027fa937] ...
	I0513 17:35:44.324197   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaba027fa937"
	I0513 17:35:44.339855   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:44.339865   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:44.375141   36897 logs.go:123] Gathering logs for kube-apiserver [8176cd4f3d53] ...
	I0513 17:35:44.375151   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8176cd4f3d53"
	I0513 17:35:44.387698   36897 logs.go:123] Gathering logs for kube-scheduler [ec34211fbe04] ...
	I0513 17:35:44.387709   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec34211fbe04"
	I0513 17:35:44.398919   36897 logs.go:123] Gathering logs for kube-controller-manager [7fc083126b07] ...
	I0513 17:35:44.398931   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fc083126b07"
	I0513 17:35:44.410415   36897 logs.go:123] Gathering logs for storage-provisioner [6a1719c9dec2] ...
	I0513 17:35:44.410431   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a1719c9dec2"
	I0513 17:35:44.425303   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:44.425316   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:44.447548   36897 logs.go:123] Gathering logs for kube-apiserver [63a5d970fd15] ...
	I0513 17:35:44.447556   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a5d970fd15"
	I0513 17:35:44.464959   36897 logs.go:123] Gathering logs for etcd [6d0a1f5f9486] ...
	I0513 17:35:44.464969   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d0a1f5f9486"
	I0513 17:35:44.482795   36897 logs.go:123] Gathering logs for coredns [705d5605d025] ...
	I0513 17:35:44.482806   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705d5605d025"
	I0513 17:35:44.494552   36897 logs.go:123] Gathering logs for kube-proxy [be219f684afb] ...
	I0513 17:35:44.494563   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be219f684afb"
	I0513 17:35:44.506673   36897 logs.go:123] Gathering logs for kube-controller-manager [d702972a5e7d] ...
	I0513 17:35:44.506686   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d702972a5e7d"
	I0513 17:35:44.524620   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:35:44.524631   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:44.537882   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:44.537894   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:44.578809   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:44.578818   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:44.583129   36897 logs.go:123] Gathering logs for storage-provisioner [cc04d007abfe] ...
	I0513 17:35:44.583138   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc04d007abfe"
	I0513 17:35:47.291865   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:47.291904   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:47.096956   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:52.099288   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:52.099420   36897 kubeadm.go:591] duration metric: took 4m14.50874325s to restartPrimaryControlPlane
	W0513 17:35:52.099530   36897 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0513 17:35:52.099583   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0513 17:35:53.121838   36897 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.022258542s)
	I0513 17:35:53.121909   36897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 17:35:53.127363   36897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:35:53.130262   36897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:35:53.133018   36897 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 17:35:53.133024   36897 kubeadm.go:156] found existing configuration files:
	
	I0513 17:35:53.133051   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/admin.conf
	I0513 17:35:53.135595   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 17:35:53.135617   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:35:53.138115   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/kubelet.conf
	I0513 17:35:53.141288   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 17:35:53.141310   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:35:53.144266   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/controller-manager.conf
	I0513 17:35:53.147014   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 17:35:53.147040   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:35:53.149926   36897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/scheduler.conf
	I0513 17:35:53.152809   36897 kubeadm.go:162] "https://control-plane.minikube.internal:56125" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56125 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 17:35:53.152831   36897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 17:35:53.155233   36897 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 17:35:53.170895   36897 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0513 17:35:53.170921   36897 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 17:35:53.217757   36897 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 17:35:53.217817   36897 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 17:35:53.217893   36897 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 17:35:53.267503   36897 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 17:35:53.271637   36897 out.go:204]   - Generating certificates and keys ...
	I0513 17:35:53.271667   36897 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 17:35:53.271693   36897 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 17:35:53.271789   36897 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0513 17:35:53.271846   36897 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0513 17:35:53.271895   36897 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0513 17:35:53.271920   36897 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0513 17:35:53.271964   36897 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0513 17:35:53.271997   36897 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0513 17:35:53.272085   36897 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0513 17:35:53.272119   36897 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0513 17:35:53.272140   36897 kubeadm.go:309] [certs] Using the existing "sa" key
	I0513 17:35:53.272202   36897 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 17:35:53.319001   36897 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 17:35:53.764616   36897 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 17:35:53.806727   36897 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 17:35:53.905176   36897 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 17:35:53.936178   36897 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 17:35:53.936483   36897 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 17:35:53.936518   36897 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 17:35:54.009364   36897 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 17:35:52.294048   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:52.294172   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:52.305488   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:35:52.305560   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:52.318463   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:35:52.318549   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:52.329652   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:35:52.329726   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:52.341373   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:35:52.341456   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:52.352646   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:35:52.352729   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:52.364215   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:35:52.364317   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:52.375532   37047 logs.go:276] 0 containers: []
	W0513 17:35:52.375543   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:52.375606   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:52.387322   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:35:52.387342   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:35:52.387347   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:35:52.415155   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:35:52.415166   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:35:52.432935   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:35:52.432947   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:52.446185   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:52.446199   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:52.576742   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:35:52.576757   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:35:52.592737   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:35:52.592755   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:35:52.605088   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:52.605102   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:52.631847   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:52.631869   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:52.670713   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:52.670738   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:52.675281   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:35:52.675293   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:35:52.688441   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:35:52.688455   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:35:52.701758   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:35:52.701771   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:35:52.722518   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:35:52.722533   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:35:52.736967   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:35:52.736982   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:35:52.748568   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:35:52.748579   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:35:52.763899   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:35:52.763919   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:35:52.785644   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:35:52.785665   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:35:55.300356   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:54.012506   36897 out.go:204]   - Booting up control plane ...
	I0513 17:35:54.012549   36897 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 17:35:54.012594   36897 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 17:35:54.012627   36897 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 17:35:54.012675   36897 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 17:35:54.012766   36897 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0513 17:35:58.517031   36897 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505628 seconds
	I0513 17:35:58.517225   36897 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 17:35:58.523691   36897 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 17:35:59.055138   36897 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 17:35:59.055419   36897 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-056000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 17:35:59.559215   36897 kubeadm.go:309] [bootstrap-token] Using token: yi4utz.blo7i6p65ke8d3ns
	I0513 17:35:59.563322   36897 out.go:204]   - Configuring RBAC rules ...
	I0513 17:35:59.563404   36897 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 17:35:59.563463   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 17:35:59.566999   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 17:35:59.568219   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 17:35:59.569209   36897 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 17:35:59.570172   36897 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 17:35:59.573622   36897 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 17:35:59.716740   36897 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 17:35:59.963343   36897 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 17:35:59.963796   36897 kubeadm.go:309] 
	I0513 17:35:59.963834   36897 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 17:35:59.963840   36897 kubeadm.go:309] 
	I0513 17:35:59.963876   36897 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 17:35:59.963878   36897 kubeadm.go:309] 
	I0513 17:35:59.963892   36897 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 17:35:59.963925   36897 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 17:35:59.963970   36897 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 17:35:59.963975   36897 kubeadm.go:309] 
	I0513 17:35:59.964000   36897 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 17:35:59.964004   36897 kubeadm.go:309] 
	I0513 17:35:59.964027   36897 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 17:35:59.964031   36897 kubeadm.go:309] 
	I0513 17:35:59.964077   36897 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 17:35:59.964156   36897 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 17:35:59.964206   36897 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 17:35:59.964211   36897 kubeadm.go:309] 
	I0513 17:35:59.964278   36897 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 17:35:59.964323   36897 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 17:35:59.964327   36897 kubeadm.go:309] 
	I0513 17:35:59.964403   36897 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yi4utz.blo7i6p65ke8d3ns \
	I0513 17:35:59.964484   36897 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 \
	I0513 17:35:59.964505   36897 kubeadm.go:309] 	--control-plane 
	I0513 17:35:59.964509   36897 kubeadm.go:309] 
	I0513 17:35:59.964549   36897 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 17:35:59.964552   36897 kubeadm.go:309] 
	I0513 17:35:59.964589   36897 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yi4utz.blo7i6p65ke8d3ns \
	I0513 17:35:59.964643   36897 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 
	I0513 17:35:59.964735   36897 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 17:35:59.964746   36897 cni.go:84] Creating CNI manager for ""
	I0513 17:35:59.964753   36897 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:35:59.968205   36897 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0513 17:35:59.974333   36897 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0513 17:35:59.977340   36897 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0513 17:35:59.982576   36897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 17:35:59.982627   36897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 17:35:59.982636   36897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-056000 minikube.k8s.io/updated_at=2024_05_13T17_35_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=running-upgrade-056000 minikube.k8s.io/primary=true
	I0513 17:36:00.020118   36897 kubeadm.go:1107] duration metric: took 37.53225ms to wait for elevateKubeSystemPrivileges
	I0513 17:36:00.020135   36897 ops.go:34] apiserver oom_adj: -16
	W0513 17:36:00.024752   36897 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 17:36:00.024761   36897 kubeadm.go:393] duration metric: took 4m22.478784792s to StartCluster
	I0513 17:36:00.024771   36897 settings.go:142] acquiring lock: {Name:mk9ef358ebdddf34ee47447e0095ef8dc921e138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:36:00.024916   36897 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:36:00.025307   36897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:36:00.025481   36897 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:36:00.030305   36897 out.go:177] * Verifying Kubernetes components...
	I0513 17:36:00.025501   36897 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 17:36:00.025593   36897 config.go:182] Loaded profile config "running-upgrade-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:36:00.038252   36897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:36:00.038281   36897 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-056000"
	I0513 17:36:00.038296   36897 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-056000"
	I0513 17:36:00.038311   36897 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-056000"
	I0513 17:36:00.038325   36897 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-056000"
	W0513 17:36:00.038329   36897 addons.go:243] addon storage-provisioner should already be in state true
	I0513 17:36:00.038338   36897 host.go:66] Checking if "running-upgrade-056000" exists ...
	I0513 17:36:00.039386   36897 kapi.go:59] client config for running-upgrade-056000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/running-upgrade-056000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b8de10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 17:36:00.039752   36897 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-056000"
	W0513 17:36:00.039757   36897 addons.go:243] addon default-storageclass should already be in state true
	I0513 17:36:00.039764   36897 host.go:66] Checking if "running-upgrade-056000" exists ...
	I0513 17:36:00.043313   36897 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:36:00.302467   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:00.302567   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:00.314397   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:00.314472   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:00.332047   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:00.332118   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:00.348528   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:00.348620   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:00.359682   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:00.359749   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:00.370175   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:00.370246   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:00.381076   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:00.381145   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:00.392397   37047 logs.go:276] 0 containers: []
	W0513 17:36:00.392409   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:00.392465   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:00.405672   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:00.405692   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:00.405698   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:00.421945   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:00.421960   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:00.435022   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:00.435036   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:00.461177   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:00.461192   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:00.475795   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:00.475812   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:00.490568   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:00.490581   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:00.510508   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:00.510526   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:00.525512   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:00.525525   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:00.530449   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:00.530462   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:00.545574   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:00.545588   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:00.557111   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:00.557123   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:00.577302   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:00.577312   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:00.597257   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:00.597271   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:00.638041   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:00.638051   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:00.652532   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:00.652546   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:00.663874   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:00.663888   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:00.701760   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:00.701768   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:00.047180   36897 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:36:00.047187   36897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 17:36:00.047194   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:36:00.047969   36897 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 17:36:00.047974   36897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 17:36:00.047978   36897 sshutil.go:53] new ssh client: &{IP:localhost Port:56093 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/running-upgrade-056000/id_rsa Username:docker}
	I0513 17:36:00.120401   36897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:36:00.127712   36897 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:36:00.127766   36897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:36:00.131876   36897 api_server.go:72] duration metric: took 106.386084ms to wait for apiserver process to appear ...
	I0513 17:36:00.131883   36897 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:36:00.131889   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:00.165320   36897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:36:00.189669   36897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 17:36:03.229584   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:05.133869   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:05.133893   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:08.231732   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:08.231868   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:08.243318   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:08.243397   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:08.254636   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:08.254719   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:08.265916   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:08.265988   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:08.276158   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:08.276231   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:08.286808   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:08.286881   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:08.300899   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:08.300966   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:08.311002   37047 logs.go:276] 0 containers: []
	W0513 17:36:08.311014   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:08.311074   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:08.321209   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:08.321230   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:08.321236   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:08.357749   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:08.357762   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:08.372086   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:08.372098   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:08.383559   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:08.383571   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:08.396230   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:08.396244   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:08.400192   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:08.400200   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:08.411754   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:08.411766   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:08.429087   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:08.429098   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:08.454414   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:08.454423   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:08.492458   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:08.492465   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:08.506479   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:08.506494   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:08.520481   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:08.520491   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:08.536231   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:08.536242   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:08.553132   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:08.553143   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:08.572825   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:08.572835   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:08.584425   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:08.584437   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:08.595876   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:08.595888   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:11.121969   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:10.134057   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:10.134083   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:16.124146   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:16.124257   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:16.135141   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:16.135224   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:15.134734   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:15.134780   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:16.146495   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:16.146574   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:16.156809   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:16.156875   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:16.168723   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:16.168799   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:16.179311   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:16.179382   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:16.190831   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:16.190898   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:16.201242   37047 logs.go:276] 0 containers: []
	W0513 17:36:16.201257   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:16.201314   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:16.211777   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:16.211794   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:16.211800   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:16.225425   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:16.225436   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:16.241507   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:16.241523   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:16.246075   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:16.246086   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:16.282681   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:16.282692   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:16.299775   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:16.299789   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:16.311938   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:16.311951   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:16.323408   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:16.323419   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:16.347524   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:16.347533   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:16.362059   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:16.362069   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:16.381355   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:16.381364   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:16.393472   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:16.393482   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:16.431599   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:16.431605   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:16.449162   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:16.449173   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:16.460512   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:16.460521   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:16.485638   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:16.485646   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:16.496941   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:16.496957   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:19.013756   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:20.135248   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:20.135302   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:24.016217   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:24.016383   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:24.026919   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:24.026990   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:24.037487   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:24.037577   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:24.054101   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:24.054171   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:24.064295   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:24.064372   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:24.078148   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:24.078213   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:24.089428   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:24.089495   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:24.099414   37047 logs.go:276] 0 containers: []
	W0513 17:36:24.099428   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:24.099483   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:24.110121   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:24.110140   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:24.110145   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:24.124598   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:24.124610   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:24.142509   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:24.142523   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:24.156545   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:24.156554   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:24.167989   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:24.168000   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:24.186125   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:24.186137   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:24.223024   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:24.223034   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:24.236668   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:24.236678   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:24.248513   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:24.248524   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:24.260478   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:24.260488   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:24.273814   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:24.273824   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:24.297691   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:24.297710   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:24.333274   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:24.333284   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:24.358241   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:24.358254   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:24.369761   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:24.369773   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:24.384064   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:24.384073   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:24.403977   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:24.403986   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:25.136002   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:25.136024   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:30.136740   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:30.136790   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0513 17:36:30.539603   36897 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0513 17:36:30.542042   36897 out.go:177] * Enabled addons: storage-provisioner
	I0513 17:36:26.910341   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:30.552778   36897 addons.go:505] duration metric: took 30.527894708s for enable addons: enabled=[storage-provisioner]
	I0513 17:36:31.912481   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:31.912598   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:31.924604   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:31.924677   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:31.935320   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:31.935410   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:31.945907   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:31.945974   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:31.956827   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:31.956900   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:31.966959   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:31.967033   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:31.977581   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:31.977645   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:31.988328   37047 logs.go:276] 0 containers: []
	W0513 17:36:31.988339   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:31.988396   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:31.998803   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:31.998822   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:31.998828   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:32.036978   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:32.036987   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:32.062654   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:32.062664   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:32.076942   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:32.076953   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:32.089008   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:32.089019   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:32.103556   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:32.103567   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:32.124904   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:32.124913   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:32.162193   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:32.162202   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:32.166806   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:32.166813   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:32.190078   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:32.190083   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:32.202335   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:32.202345   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:32.214718   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:32.214729   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:32.226306   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:32.226315   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:32.238427   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:32.238437   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:32.255466   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:32.255480   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:32.267253   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:32.267265   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:32.281110   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:32.281118   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:34.797701   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:35.138206   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:35.138242   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:39.798643   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:39.798755   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:39.810268   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:39.810345   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:39.821298   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:39.821367   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:39.832041   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:39.832106   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:39.842469   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:39.842547   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:39.853339   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:39.853409   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:39.864356   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:39.864425   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:39.874834   37047 logs.go:276] 0 containers: []
	W0513 17:36:39.874845   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:39.874903   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:39.885986   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:39.886004   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:39.886009   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:39.921825   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:39.921835   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:39.936418   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:39.936429   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:39.960522   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:39.960538   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:39.978186   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:39.978196   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:39.997808   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:39.997818   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:40.009201   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:40.009211   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:40.020601   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:40.020614   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:40.039097   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:40.039108   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:40.051383   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:40.051394   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:40.063373   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:40.063383   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:40.074371   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:40.074381   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:40.097648   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:40.097656   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:40.134661   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:40.134671   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:40.138726   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:40.138731   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:40.152795   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:40.152810   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:40.168152   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:40.168165   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:40.139019   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:40.139041   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:42.682111   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:45.140624   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:45.140659   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:47.684360   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:47.684549   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:47.710043   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:47.710140   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:47.724261   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:47.724344   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:47.736011   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:47.736081   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:47.747034   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:47.747116   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:47.757760   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:47.757829   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:47.768650   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:47.768719   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:47.779101   37047 logs.go:276] 0 containers: []
	W0513 17:36:47.779111   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:47.779170   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:47.789508   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:47.789525   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:47.789533   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:47.794730   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:47.794740   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:47.806754   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:47.806765   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:47.818704   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:47.818713   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:47.845806   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:47.845816   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:47.856539   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:47.856548   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:47.867697   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:47.867707   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:47.892651   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:47.892658   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:47.904357   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:47.904370   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:47.929257   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:47.929267   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:47.944115   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:47.944127   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:47.959416   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:47.959430   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:47.977001   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:47.977010   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:48.015704   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:48.015716   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:48.055476   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:48.055488   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:48.072302   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:48.072348   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:48.086457   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:48.086471   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:50.602856   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:50.142736   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:50.142757   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:55.605109   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:55.605302   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:55.628459   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:55.628578   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:55.644088   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:55.644161   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:55.657032   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:55.657090   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:55.667526   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:55.667597   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:55.677856   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:55.677935   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:55.693974   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:55.694040   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:55.704291   37047 logs.go:276] 0 containers: []
	W0513 17:36:55.704302   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:55.704357   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:55.714590   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:55.714607   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:55.714622   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:55.752495   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:55.752506   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:55.796371   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:55.796388   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:55.829672   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:55.829685   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:55.846032   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:55.846044   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:55.859292   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:55.859305   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:55.872834   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:55.872846   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:55.898626   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:55.898637   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:55.910388   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:55.910399   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:55.924729   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:55.924739   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:55.939186   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:55.939196   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:55.950738   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:55.950749   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:55.965387   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:55.965397   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:55.982683   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:55.982695   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:56.002721   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:56.002731   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:56.006998   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:56.007005   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:56.026090   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:56.026100   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:55.144599   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:55.144629   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:58.539733   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:00.146729   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:00.146844   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:00.158826   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:00.158890   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:00.169167   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:00.169225   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:00.179666   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:00.179740   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:00.190310   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:00.190372   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:00.200804   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:00.200880   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:00.210827   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:00.210892   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:00.221436   36897 logs.go:276] 0 containers: []
	W0513 17:37:00.221447   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:00.221499   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:00.232465   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:00.232480   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:00.232485   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:00.267911   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:00.267923   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:00.282593   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:00.282604   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:00.299024   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:00.299037   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:00.310661   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:00.310674   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:00.322366   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:00.322378   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:00.340377   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:00.340387   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:00.364404   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:00.364410   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:00.368592   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:00.368597   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:00.379549   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:00.379560   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:00.391253   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:00.391263   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:00.406101   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:00.406109   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:00.418179   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:00.418194   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:03.542047   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:03.542279   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:03.569138   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:03.569244   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:03.585032   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:03.585115   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:03.597107   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:03.597177   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:03.607934   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:03.607999   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:03.618559   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:03.618629   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:03.629615   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:03.629677   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:03.639832   37047 logs.go:276] 0 containers: []
	W0513 17:37:03.639845   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:03.639895   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:03.650339   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:03.650355   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:03.650361   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:03.668513   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:03.668524   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:03.705040   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:03.705048   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:03.709001   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:03.709009   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:03.729390   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:03.729401   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:03.741016   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:03.741027   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:03.755214   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:03.755223   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:03.783310   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:03.783323   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:03.795081   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:03.795091   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:03.811776   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:03.811791   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:03.837438   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:03.837447   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:03.849019   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:03.849030   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:03.860800   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:03.860815   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:03.885193   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:03.885200   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:03.922388   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:03.922399   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:03.937802   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:03.937812   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:03.957293   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:03.957304   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:02.955137   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:06.470498   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:07.957365   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:07.957542   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:07.973307   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:07.973393   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:07.985328   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:07.985393   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:07.996409   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:07.996480   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:08.007050   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:08.007117   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:08.017273   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:08.017346   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:08.027336   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:08.027397   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:08.038179   36897 logs.go:276] 0 containers: []
	W0513 17:37:08.038190   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:08.038242   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:08.049017   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:08.049031   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:08.049037   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:08.060613   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:08.060624   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:08.072143   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:08.072153   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:08.095502   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:08.095513   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:08.106745   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:08.106756   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:08.142212   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:08.142223   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:08.180024   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:08.180035   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:08.194005   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:08.194016   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:08.212638   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:08.212647   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:08.225191   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:08.225202   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:08.248539   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:08.248549   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:08.266344   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:08.266354   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:08.277867   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:08.277877   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:10.783511   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:11.472654   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:11.472788   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:11.485829   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:11.485903   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:11.496992   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:11.497060   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:11.508943   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:11.509010   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:11.519517   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:11.519583   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:11.529841   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:11.529908   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:11.541127   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:11.541199   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:11.551632   37047 logs.go:276] 0 containers: []
	W0513 17:37:11.551643   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:11.551695   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:11.562246   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:11.562265   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:11.562271   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:11.600381   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:11.600392   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:11.604985   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:11.604990   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:11.615894   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:11.615905   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:11.627659   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:11.627671   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:11.643032   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:11.643046   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:11.654032   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:11.654043   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:11.666177   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:11.666187   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:11.701446   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:11.701460   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:11.726283   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:11.726294   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:11.740603   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:11.740613   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:11.752398   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:11.752407   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:11.771469   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:11.771479   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:11.783056   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:11.783069   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:11.797114   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:11.797125   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:11.811876   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:11.811885   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:11.829557   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:11.829569   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:14.354400   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:15.786045   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:15.786205   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:15.809377   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:15.809443   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:15.819640   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:15.819709   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:15.830240   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:15.830304   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:15.844693   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:15.844763   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:15.860529   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:15.860595   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:15.871006   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:15.871074   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:15.881410   36897 logs.go:276] 0 containers: []
	W0513 17:37:15.881421   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:15.881478   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:15.891967   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:15.891982   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:15.891989   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:15.926360   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:15.926372   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:15.940735   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:15.940749   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:15.952408   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:15.952418   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:15.976675   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:15.976683   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:15.988018   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:15.988031   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:16.021824   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:16.021836   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:16.026876   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:16.026885   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:16.040664   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:16.040675   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:16.052233   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:16.052245   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:16.067100   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:16.067110   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:16.079068   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:16.079078   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:16.096573   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:16.096583   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
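
Editor's note: within each cycle, every component is discovered the same way: "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" yields the container IDs logged by logs.go:276, then "docker logs --tail 400 <id>" dumps each one; the "container status" step additionally falls back from crictl to docker (`which crictl || echo crictl` ... `|| sudo docker ps -a`). In the log these commands run over SSH inside the VM via ssh_runner.go; the sketch below runs docker locally for illustration, and the function and component names are assumptions.

// gatherComponentLogs mimics the enumerate-then-tail pattern visible above:
// list containers whose name matches k8s_<component>, then tail 400 lines
// from each. Local-docker sketch only; minikube does this through ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherComponentLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out)) // e.g. [aa1c5dc812f1] for kube-apiserver above
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// docker writes container logs to stderr as well, so capture both streams
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("error:", err)
		}
	}
}
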
	I0513 17:37:19.356560   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:19.356771   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:19.374848   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:19.374932   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:19.388019   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:19.388089   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:19.399922   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:19.399988   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:19.410323   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:19.410390   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:19.420756   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:19.420815   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:19.431298   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:19.431372   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:19.441079   37047 logs.go:276] 0 containers: []
	W0513 17:37:19.441091   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:19.441150   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:19.451489   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:19.451509   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:19.451514   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:19.465197   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:19.465207   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:19.476741   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:19.476752   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:19.501125   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:19.501134   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:19.512712   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:19.512723   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:19.526947   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:19.526956   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:19.541973   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:19.541984   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:19.560232   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:19.560244   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:19.572024   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:19.572036   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:19.583471   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:19.583482   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:19.594732   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:19.594742   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:19.598651   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:19.598659   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:19.622632   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:19.622643   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:19.637471   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:19.637482   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:19.655041   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:19.655054   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:19.692845   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:19.692852   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:19.728215   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:19.728225   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:18.610034   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:22.248727   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:23.612627   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:23.612847   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:23.638820   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:23.638918   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:23.654933   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:23.655011   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:23.667198   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:23.667265   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:23.678164   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:23.678236   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:23.688417   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:23.688479   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:23.698887   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:23.698949   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:23.714013   36897 logs.go:276] 0 containers: []
	W0513 17:37:23.714023   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:23.714075   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:23.724680   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:23.724695   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:23.724701   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:23.737389   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:23.737400   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:23.754480   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:23.754493   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:23.765973   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:23.765984   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:23.778127   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:23.778138   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:23.813112   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:23.813125   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:23.817880   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:23.817887   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:23.829486   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:23.829496   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:23.844755   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:23.844765   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:23.867914   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:23.867924   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:23.901444   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:23.901454   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:23.916578   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:23.916590   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:23.930371   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:23.930383   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:26.443818   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:27.251019   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:27.251172   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:27.263735   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:27.263812   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:27.274700   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:27.274769   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:27.285029   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:27.285097   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:27.295848   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:27.295916   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:27.306120   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:27.306197   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:27.316479   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:27.316543   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:27.326488   37047 logs.go:276] 0 containers: []
	W0513 17:37:27.326497   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:27.326552   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:27.337067   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:27.337085   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:27.337091   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:27.360310   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:27.360319   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:27.397906   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:27.397920   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:27.412843   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:27.412857   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:27.438676   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:27.438685   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:27.458009   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:27.458018   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:27.468966   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:27.468978   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:27.482988   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:27.482997   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:27.494539   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:27.494551   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:27.505862   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:27.505873   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:27.540852   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:27.540862   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:27.555341   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:27.555351   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:27.566549   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:27.566574   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:27.583461   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:27.583471   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:27.587735   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:27.587740   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:27.599334   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:27.599344   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:27.614969   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:27.614979   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:30.128460   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:31.445969   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:31.446081   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:31.461483   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:31.461576   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:31.471973   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:31.472043   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:35.128863   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:35.129008   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:35.149967   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:35.150057   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:35.162291   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:35.162363   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:35.173352   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:35.173418   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:35.183742   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:35.183814   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:35.194725   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:35.194792   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:35.205546   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:35.205606   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:35.215601   37047 logs.go:276] 0 containers: []
	W0513 17:37:35.215611   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:35.215669   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:35.226174   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:35.226191   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:35.226196   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:35.240146   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:35.240156   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:35.251486   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:35.251498   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:35.290011   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:35.290021   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:35.304975   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:35.304986   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:35.316808   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:35.316819   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:35.339523   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:35.339529   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:35.350758   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:35.350768   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:35.362283   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:35.362294   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:35.366438   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:35.366444   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:35.400131   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:35.400141   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:35.415298   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:35.415308   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:35.433191   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:35.433203   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:35.448531   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:35.448543   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:35.463222   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:35.463233   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:35.487952   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:35.487962   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:35.510612   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:35.510621   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:31.482328   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:31.484213   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:31.494418   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:31.494484   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:31.508687   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:31.508756   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:31.519043   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:31.519110   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:31.529462   36897 logs.go:276] 0 containers: []
	W0513 17:37:31.529472   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:31.529523   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:31.539364   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:31.539379   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:31.539384   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:31.551799   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:31.551811   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:31.563638   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:31.563648   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:31.580675   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:31.580685   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:31.593290   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:31.593300   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:31.607888   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:31.607896   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:31.621586   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:31.621598   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:31.632966   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:31.632975   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:31.649354   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:31.649368   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:31.664009   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:31.664019   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:31.698736   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:31.698743   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:31.703409   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:31.703416   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:31.739227   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:31.739239   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:34.266064   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:38.032924   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:39.268286   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:39.268397   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:39.279528   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:39.279604   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:39.294957   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:39.295021   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:39.304881   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:39.304943   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:39.315890   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:39.315956   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:39.326117   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:39.326183   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:39.337478   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:39.337543   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:39.351938   36897 logs.go:276] 0 containers: []
	W0513 17:37:39.351952   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:39.352007   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:39.362387   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:39.362401   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:39.362407   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:39.397742   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:39.397752   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:39.402647   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:39.402657   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:39.473037   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:39.473050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:39.487478   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:39.487489   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:39.499165   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:39.499176   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:39.513238   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:39.513248   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:39.533294   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:39.533304   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:39.551115   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:39.551125   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:39.562613   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:39.562628   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:39.586671   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:39.586683   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:39.598130   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:39.598145   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:39.613232   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:39.613242   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:43.035059   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:43.035288   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:43.059669   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:43.059773   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:43.074994   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:43.075074   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:43.088284   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:43.088355   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:43.099127   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:43.099202   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:43.108999   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:43.109068   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:43.120416   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:43.120486   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:43.130817   37047 logs.go:276] 0 containers: []
	W0513 17:37:43.130827   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:43.130880   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:43.141386   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:43.141403   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:43.141408   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:43.179315   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:43.179331   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:43.194068   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:43.194078   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:43.214092   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:43.214103   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:43.228159   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:43.228168   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:43.239807   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:43.239818   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:43.254678   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:43.254689   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:43.265540   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:43.265551   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:43.277454   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:43.277464   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:43.299179   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:43.299191   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:43.317606   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:43.317617   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:43.329386   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:43.329396   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:43.367127   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:43.367145   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:43.371624   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:43.371631   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:43.385887   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:43.385898   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:43.411667   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:43.411677   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:43.423294   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:43.423306   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
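
Editor's note: the overall cadence is probe, fail, gather, repeat. Two minikube processes (pids 36897 and 37047) run this loop concurrently against the same endpoint, which is why their cycles interleave and timestamps occasionally step backwards in the combined log (e.g. a 17:37:31 line from 36897 landing after a 17:37:35 block from 37047). A sketch of that retry loop is below; the interval and deadline values are assumptions for illustration, not the values minikube uses.

// pollUntilHealthy captures this log's cadence: probe /healthz, and on each
// timeout run the full log-gathering pass, until an overall deadline.
package main

import (
	"errors"
	"fmt"
	"time"
)

func pollUntilHealthy(probe func() error, gather func(), interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := probe(); err == nil {
			return nil // apiserver finally answered healthz
		}
		gather() // dump kubelet/dmesg/describe-nodes/container logs, as above
		time.Sleep(interval)
	}
	return errors.New("apiserver never became healthy before the deadline")
}

func main() {
	err := pollUntilHealthy(
		func() error { return errors.New("context deadline exceeded") }, // stand-in probe
		func() { fmt.Println("gathering logs ...") },
		3*time.Second,  // assumed polling interval
		30*time.Second, // assumed overall deadline
	)
	fmt.Println(err)
}
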
	I0513 17:37:45.949231   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:42.126869   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:50.951521   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:50.951638   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:50.965286   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:50.965359   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:50.977145   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:50.977205   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:50.988487   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:50.988549   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:50.999019   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:50.999089   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:51.009268   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:51.009338   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:51.019625   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:51.019690   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:51.029684   37047 logs.go:276] 0 containers: []
	W0513 17:37:51.029696   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:51.029749   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:51.040220   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:51.040237   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:51.040242   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:51.076684   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:51.076692   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:51.090766   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:51.090780   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:51.105481   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:51.105494   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:51.122819   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:51.122830   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:47.129200   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:47.129470   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:47.162834   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:47.162958   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:47.180856   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:47.180928   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:47.194539   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:47.194618   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:47.210789   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:47.210865   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:47.221474   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:47.221544   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:47.232283   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:47.232350   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:47.242429   36897 logs.go:276] 0 containers: []
	W0513 17:37:47.242446   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:47.242528   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:47.253064   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:47.253079   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:47.253084   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:47.286575   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:47.286585   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:47.290726   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:47.290736   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:47.327465   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:47.327477   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:47.340959   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:47.340969   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:47.352603   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:47.352616   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:47.375467   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:47.375475   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:47.387375   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:47.387386   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:47.401828   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:47.401838   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:47.415606   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:47.415617   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:47.429544   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:47.429554   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:47.444109   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:47.444121   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:47.464493   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:47.464502   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
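	(Each "Gathering logs for ..." pair above maps to one remote command: journalctl for systemd units, `docker logs --tail 400` for containers, kubectl for node state. A sketch under those assumptions, running the commands locally rather than over SSH as ssh_runner.go does; the container ID is one discovered in this run, and any live ID would work.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command and prints its combined output,
// echoing the per-source collection pattern in the log above.
func gather(label string, name string, args ...string) {
	fmt.Printf("==> %s <==\n", label)
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("Docker", "sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
	// aa1c5dc812f1 is the kube-apiserver container found above; substitute any real ID.
	gather("kube-apiserver [aa1c5dc812f1]", "docker", "logs", "--tail", "400", "aa1c5dc812f1")
}
```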
	I0513 17:37:49.990522   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:51.140597   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:51.141169   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:51.180547   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:51.180566   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:51.194906   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:51.194920   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:51.212334   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:51.212348   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:51.225735   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:51.225747   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:51.251532   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:51.251543   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:51.263070   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:51.263080   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:51.281684   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:51.281694   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:51.293732   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:51.293746   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:51.316833   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:51.316843   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:51.321233   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:51.321239   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:51.335562   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:51.335571   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:53.851821   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:54.992760   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:54.992979   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:55.013391   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:37:55.013486   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:55.028055   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:37:55.028120   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:55.041089   36897 logs.go:276] 2 containers: [186d0aa98e14 8d8f0aa7f156]
	I0513 17:37:55.041173   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:55.052065   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:37:55.052126   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:55.062437   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:37:55.062511   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:55.082065   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:37:55.082137   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:55.092562   36897 logs.go:276] 0 containers: []
	W0513 17:37:55.092573   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:55.092627   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:55.103075   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:37:55.103092   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:55.103098   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:55.137161   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:55.137180   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:55.141640   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:37:55.141649   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:37:55.152910   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:55.152921   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:55.177857   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:37:55.177868   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:55.189587   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:37:55.189598   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:37:55.200996   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:37:55.201009   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:37:55.218814   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:55.218828   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:55.256694   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:37:55.256705   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:37:55.271283   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:37:55.271293   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:37:55.285272   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:37:55.285282   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:37:55.297676   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:37:55.297686   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:37:55.310231   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:37:55.310242   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:37:58.854073   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:58.854274   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:58.877021   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:58.877106   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:58.890226   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:58.890295   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:58.902106   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:58.902175   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:58.915698   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:58.915767   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:58.926085   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:58.926148   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:58.936529   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:58.936595   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:58.949102   37047 logs.go:276] 0 containers: []
	W0513 17:37:58.949118   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:58.949180   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:58.960058   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:58.960075   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:58.960082   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:58.997842   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:58.997849   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:59.001901   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:59.001910   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:59.019159   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:59.019169   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:59.042621   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:59.042629   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:59.057155   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:59.057165   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:59.073237   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:59.073250   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:59.084723   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:59.084735   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:59.097508   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:59.097520   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:59.143060   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:59.143077   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:59.158222   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:59.158235   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:59.185352   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:59.185366   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:59.200739   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:59.200750   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:59.212048   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:59.212058   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:59.226739   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:59.226750   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:59.253593   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:59.253603   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:59.264745   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:59.264758   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:57.827086   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:01.776590   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:02.829228   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:02.829431   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:02.850216   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:02.850312   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:02.864693   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:02.864761   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:02.877722   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:02.877802   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:02.888467   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:02.888536   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:02.898498   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:02.898565   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:02.909670   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:02.909739   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:02.919442   36897 logs.go:276] 0 containers: []
	W0513 17:38:02.919453   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:02.919508   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:02.932643   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:02.932660   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:02.932670   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:02.966373   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:02.966386   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:02.971426   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:02.971433   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:02.985539   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:02.985548   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:02.999861   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:02.999875   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:03.011396   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:03.011408   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:03.034603   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:03.034611   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:03.069909   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:03.069923   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:03.081324   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:03.081334   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:03.096449   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:03.096460   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:03.117758   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:03.117773   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:03.129801   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:03.129812   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:03.142422   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:03.142433   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:03.154289   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:03.154302   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:03.166236   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:03.166249   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:05.680112   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:06.776975   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:06.777180   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:06.799416   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:06.799517   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:06.814696   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:06.814765   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:06.826971   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:06.827045   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:06.837945   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:06.838008   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:06.855315   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:06.855382   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:06.866130   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:06.866189   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:06.875662   37047 logs.go:276] 0 containers: []
	W0513 17:38:06.875672   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:06.875723   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:06.886343   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:06.886361   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:06.886367   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:06.890791   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:06.890798   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:06.915534   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:06.915544   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:06.930231   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:06.930241   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:06.941751   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:06.941760   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:06.963177   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:06.963187   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:07.001065   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:07.001073   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:07.019091   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:07.019102   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:07.035718   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:07.035729   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:07.047493   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:07.047504   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:07.058852   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:07.058862   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:07.070726   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:07.070737   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:07.106848   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:07.106860   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:07.121529   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:07.121538   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:07.135615   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:07.135625   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:07.147223   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:07.147234   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:07.159353   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:07.159363   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:09.686071   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:10.682399   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:10.682697   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:10.716326   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:10.716455   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:10.735351   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:10.735467   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:10.750393   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:10.750469   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:10.767587   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:10.767659   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:10.778414   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:10.778486   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:10.789612   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:10.789683   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:10.799486   36897 logs.go:276] 0 containers: []
	W0513 17:38:10.799496   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:10.799550   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:10.809903   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:10.809920   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:10.809926   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:10.815021   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:10.815030   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:10.826128   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:10.826140   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:10.838680   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:10.838691   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:10.853331   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:10.853341   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:10.870699   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:10.870709   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:10.906516   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:10.906524   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:10.951785   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:10.951800   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:10.965822   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:10.965834   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:10.984843   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:10.984853   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:10.997948   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:10.997958   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:11.011158   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:11.011169   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:11.035526   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:11.035536   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:11.046553   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:11.046564   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:11.062136   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:11.062148   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
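	(The "container status" step in each cycle above uses a shell fallback: run crictl if `which` can resolve it, otherwise fall back to `docker ps -a`. A hedged Go equivalent of the same preference order, assuming at least one of the two CLIs is on PATH and sudo is password-less as in the test VM.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker, approximating the
// `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` line above.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}
```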
	I0513 17:38:14.688363   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:14.688497   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:14.701125   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:14.701193   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:14.711984   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:14.712058   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:14.722189   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:14.722255   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:14.734070   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:14.734140   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:14.744599   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:14.744662   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:14.758748   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:14.758813   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:14.768640   37047 logs.go:276] 0 containers: []
	W0513 17:38:14.768658   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:14.768710   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:14.779264   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:14.779283   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:14.779288   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:14.802159   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:14.802166   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:14.826698   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:14.826708   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:14.845070   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:14.845082   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:14.869908   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:14.869920   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:14.884219   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:14.884230   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:14.895983   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:14.895993   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:14.912749   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:14.912761   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:14.928153   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:14.928164   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:14.939996   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:14.940007   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:14.953059   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:14.953069   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:14.991128   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:14.991143   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:14.995981   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:14.995986   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:15.010587   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:15.010597   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:15.021572   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:15.021583   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:15.057043   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:15.057057   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:15.068660   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:15.068671   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:13.576650   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:17.587940   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:18.578871   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:18.579116   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:18.605556   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:18.605660   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:18.623558   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:18.623651   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:18.637944   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:18.638021   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:18.649141   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:18.649208   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:18.659750   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:18.659812   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:18.669913   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:18.669982   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:18.680044   36897 logs.go:276] 0 containers: []
	W0513 17:38:18.680055   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:18.680112   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:18.690469   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:18.690487   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:18.690492   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:18.701793   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:18.701803   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:18.737096   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:18.737111   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:18.777935   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:18.777949   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:18.797152   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:18.797163   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:18.814288   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:18.814298   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:18.840126   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:18.840134   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:18.854174   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:18.854186   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:18.873188   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:18.873200   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:18.884203   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:18.884214   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:18.895807   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:18.895821   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:18.900183   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:18.900192   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:18.919388   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:18.919400   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:18.931111   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:18.931122   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:18.942788   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:18.942798   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:21.463788   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:22.590436   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:22.590692   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:22.617662   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:22.617783   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:22.635935   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:22.636011   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:22.649800   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:22.649868   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:22.661279   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:22.661349   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:22.671877   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:22.671937   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:22.682974   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:22.683033   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:22.693124   37047 logs.go:276] 0 containers: []
	W0513 17:38:22.693135   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:22.693193   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:22.703710   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:22.703730   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:22.703736   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:22.708320   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:22.708330   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:22.722244   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:22.722258   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:22.759188   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:22.759195   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:22.773009   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:22.773020   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:22.784512   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:22.784523   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:22.805237   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:22.805247   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:22.818925   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:22.818934   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:22.833966   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:22.833977   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:22.848826   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:22.848839   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:22.870828   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:22.870840   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:22.882528   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:22.882537   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:22.918182   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:22.918191   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:22.943252   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:22.943263   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:22.961924   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:22.961936   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:22.974134   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:22.974145   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:22.992584   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:22.992594   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:25.505988   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:26.464051   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:26.464228   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:26.475534   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:26.475609   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:30.508186   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:30.508357   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:30.526829   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:30.526927   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:30.541663   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:30.541739   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:30.554275   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:30.554344   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:30.564979   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:30.565052   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:30.576901   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:30.576966   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:30.587220   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:30.587284   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:30.597762   37047 logs.go:276] 0 containers: []
	W0513 17:38:30.597773   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:30.597827   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:30.608365   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:30.608382   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:30.608389   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:30.646449   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:30.646459   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:30.660878   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:30.660891   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:30.673218   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:30.673228   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:30.688189   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:30.688205   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:30.707206   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:30.707221   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:30.721364   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:30.721379   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:30.755911   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:30.755925   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:30.769759   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:30.769770   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:30.780792   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:30.780803   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:30.792283   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:30.792294   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:30.804273   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:30.804283   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:30.808213   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:30.808222   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:30.832924   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:30.832933   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:30.845308   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:30.845322   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:30.866407   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:30.866418   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:30.877656   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:30.877666   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:26.486276   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:26.486345   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:26.496880   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:26.496946   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:26.508610   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:26.508681   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:26.518833   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:26.518899   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:26.534881   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:26.534952   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:26.545367   36897 logs.go:276] 0 containers: []
	W0513 17:38:26.545377   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:26.545429   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:26.556110   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:26.556125   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:26.556131   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:26.567429   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:26.567439   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:26.581979   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:26.581991   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:26.596490   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:26.596502   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:26.620351   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:26.620360   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:26.655388   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:26.655399   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:26.670552   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:26.670564   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:26.682174   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:26.682184   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:26.717173   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:26.717182   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:26.740677   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:26.740689   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:26.758403   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:26.758414   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:26.776734   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:26.776745   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:26.793848   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:26.793857   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:26.805824   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:26.805836   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:26.811081   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:26.811087   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:29.324820   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:33.403212   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:34.327126   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
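
The "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above are single probes of the apiserver's /healthz endpoint that give up after a client-side timeout. A sketch of one such probe, assuming a 5-second timeout and skipping TLS verification (the guest's certificate is not trusted by the host; minikube's real client pins the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz performs one probe of the apiserver /healthz endpoint.
    // Verification is skipped here because the cluster cert is self-signed
    // inside the VM; the timeout value is an assumption.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // produces the Client.Timeout error seen above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded while awaiting headers
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }

When the probe errors out, minikube falls back to gathering logs from every control-plane container, which is the loop visible before and after this point.
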
	I0513 17:38:34.327326   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:34.352612   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:34.352685   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:34.363862   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:34.363929   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:34.374926   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:34.375002   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:34.385525   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:34.385594   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:34.395869   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:34.395941   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:34.406552   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:34.406621   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:34.421221   36897 logs.go:276] 0 containers: []
	W0513 17:38:34.421232   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:34.421289   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:34.431469   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:34.431485   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:34.431492   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:34.435943   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:34.435953   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:34.470138   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:34.470151   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:34.488212   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:34.488226   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:34.521058   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:34.521067   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:34.532293   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:34.532304   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:34.543772   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:34.543784   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:34.555345   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:34.555358   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:34.566762   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:34.566773   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:34.579205   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:34.579217   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:34.593244   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:34.593255   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:34.607714   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:34.607724   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:34.618781   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:34.618790   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:34.636357   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:34.636368   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:34.661177   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:34.661187   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:38.404678   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:38.404794   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:38.432716   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:38.432790   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:38.448942   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:38.449011   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:38.459694   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:38.459775   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:38.470670   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:38.470730   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:38.480793   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:38.480849   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:38.491215   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:38.491279   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:38.501503   37047 logs.go:276] 0 containers: []
	W0513 17:38:38.501514   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:38.501593   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:38.512480   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:38.512501   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:38.512507   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:38.516635   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:38.516642   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:38.550829   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:38.550841   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:38.565527   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:38.565537   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:38.576514   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:38.576525   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:38.593800   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:38.593812   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:38.615854   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:38.615864   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:38.641416   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:38.641430   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:38.661421   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:38.661431   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:38.672723   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:38.672733   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:38.686042   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:38.686052   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:38.697763   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:38.697772   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:38.719843   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:38.719852   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:38.731748   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:38.731758   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:38.768153   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:38.768161   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:38.781923   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:38.781938   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:38.803807   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:38.803819   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:37.175296   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:41.317194   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:42.177499   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:42.177718   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:42.199428   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:42.199516   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:42.215044   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:42.215118   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:42.227570   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:42.227636   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:42.238256   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:42.238320   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:42.257021   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:42.257086   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:42.267484   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:42.267548   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:42.278221   36897 logs.go:276] 0 containers: []
	W0513 17:38:42.278232   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:42.278288   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:42.289598   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:42.289613   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:42.289619   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:42.296709   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:42.296717   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:42.313478   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:42.313486   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:42.338280   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:42.338288   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:42.373269   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:42.373281   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:42.387344   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:42.387354   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:42.402396   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:42.402406   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:42.416824   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:42.416834   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:42.428577   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:42.428588   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:42.463464   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:42.463474   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:42.487021   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:42.487030   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:42.498857   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:42.498865   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:42.510835   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:42.510849   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:42.522591   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:42.522602   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:42.535201   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:42.535214   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:45.049201   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:46.319308   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:46.319503   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:46.337685   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:46.337768   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:46.351217   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:46.351304   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:46.363954   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:46.364015   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:46.374934   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:46.375006   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:46.395495   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:46.395563   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:46.408515   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:46.408590   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:46.423458   37047 logs.go:276] 0 containers: []
	W0513 17:38:46.423469   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:46.423526   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:46.434072   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:46.434089   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:46.434094   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:46.459251   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:46.459267   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:46.471459   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:46.471472   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:46.475455   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:46.475463   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:46.515367   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:46.515379   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:46.529559   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:46.529569   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:46.541005   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:46.541017   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:46.557946   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:46.557958   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:46.579953   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:46.579960   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:46.591409   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:46.591420   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:46.606481   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:46.606491   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:46.643009   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:46.643017   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:46.657421   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:46.657431   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:46.673136   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:46.673146   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:46.685110   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:46.685120   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:46.704081   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:46.704091   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:46.715919   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:46.715931   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:49.231095   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:50.051487   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:50.051624   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:50.067661   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:50.067730   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:50.077821   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:50.077876   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:50.088843   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:50.088913   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:50.099234   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:50.099303   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:50.109940   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:50.110005   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:50.120741   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:50.120805   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:50.131204   36897 logs.go:276] 0 containers: []
	W0513 17:38:50.131215   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:50.131262   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:50.141703   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:50.141719   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:50.141725   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:50.154059   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:50.154069   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:50.177758   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:50.177769   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:50.201588   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:50.201597   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:50.237165   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:50.237179   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:50.256059   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:50.256069   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:50.267727   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:50.267739   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:50.279431   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:50.279443   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:50.294613   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:50.294628   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:50.329371   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:50.329379   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:50.343673   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:50.343683   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:50.355111   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:50.355121   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:50.359755   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:50.359764   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:50.371118   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:50.371128   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:50.383070   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:50.383080   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:54.232605   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:54.232694   37047 kubeadm.go:591] duration metric: took 4m3.722933708s to restartPrimaryControlPlane
	W0513 17:38:54.232757   37047 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0513 17:38:54.232783   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0513 17:38:55.293976   37047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.061202041s)
	I0513 17:38:55.294044   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 17:38:55.299254   37047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:38:55.301997   37047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:38:55.305121   37047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 17:38:55.305127   37047 kubeadm.go:156] found existing configuration files:
	
	I0513 17:38:55.305148   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf
	I0513 17:38:55.308321   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 17:38:55.308349   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:38:55.311030   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf
	I0513 17:38:55.313423   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 17:38:55.313443   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:38:55.316618   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf
	I0513 17:38:55.319613   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 17:38:55.319639   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:38:55.322037   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf
	I0513 17:38:55.324823   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 17:38:55.324844   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
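
The grep/rm sequence above is the stale-kubeconfig cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is checked for the expected control-plane endpoint and deleted when the endpoint (or, as here, the file itself) is missing, so kubeadm can regenerate them. A sketch of the same loop, assuming local exec (minikube runs these commands over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleConfigs mirrors the grep-then-rm pattern in the logs: any
    // kubeconfig that does not mention the expected control-plane endpoint
    // is removed. Sketch only; not minikube's actual kubeadm.go code.
    func cleanStaleConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                // grep exits non-zero when the endpoint (or the file) is missing
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:56308")
    }
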
	I0513 17:38:55.327834   37047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 17:38:55.344975   37047 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0513 17:38:55.345001   37047 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 17:38:55.399663   37047 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 17:38:55.399714   37047 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 17:38:55.399770   37047 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 17:38:55.447893   37047 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 17:38:55.451131   37047 out.go:204]   - Generating certificates and keys ...
	I0513 17:38:55.451170   37047 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 17:38:55.451207   37047 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 17:38:55.451251   37047 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0513 17:38:55.451286   37047 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0513 17:38:55.451323   37047 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0513 17:38:55.451357   37047 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0513 17:38:55.451390   37047 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0513 17:38:55.451419   37047 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0513 17:38:55.451456   37047 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0513 17:38:55.451493   37047 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0513 17:38:55.451510   37047 kubeadm.go:309] [certs] Using the existing "sa" key
	I0513 17:38:55.451538   37047 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 17:38:55.742390   37047 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 17:38:55.907583   37047 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 17:38:55.988389   37047 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 17:38:56.177727   37047 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 17:38:56.208755   37047 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 17:38:56.209068   37047 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 17:38:56.209131   37047 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 17:38:56.299179   37047 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 17:38:52.896434   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:56.307323   37047 out.go:204]   - Booting up control plane ...
	I0513 17:38:56.307376   37047 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 17:38:56.307429   37047 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 17:38:56.307465   37047 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 17:38:56.307521   37047 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 17:38:56.307626   37047 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0513 17:39:00.809009   37047 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504488 seconds
	I0513 17:39:00.809119   37047 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 17:39:00.812832   37047 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 17:38:57.897756   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:57.897848   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:57.909320   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:38:57.909390   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:57.920130   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:38:57.920197   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:57.931770   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:38:57.931838   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:57.942220   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:38:57.942284   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:57.954366   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:38:57.954436   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:57.965815   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:38:57.965881   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:57.976187   36897 logs.go:276] 0 containers: []
	W0513 17:38:57.976198   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:57.976253   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:57.987219   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:38:57.987240   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:38:57.987246   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:38:58.002232   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:38:58.002242   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:38:58.014705   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:38:58.014717   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:38:58.026846   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:38:58.026857   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:38:58.041277   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:38:58.041291   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:38:58.053041   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:38:58.053050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:38:58.072769   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:58.072780   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:58.109484   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:58.109499   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:58.114352   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:58.114360   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:58.151545   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:38:58.151557   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:38:58.169526   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:38:58.169536   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:38:58.199514   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:38:58.199526   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:38:58.212182   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:58.212193   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:58.238131   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:38:58.238144   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:38:58.253400   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:38:58.253411   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:00.767663   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:01.324718   37047 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 17:39:01.324905   37047 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-201000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 17:39:01.830099   37047 kubeadm.go:309] [bootstrap-token] Using token: rm9hda.wbqm6wosqfjby2vj
	I0513 17:39:01.833827   37047 out.go:204]   - Configuring RBAC rules ...
	I0513 17:39:01.833884   37047 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 17:39:01.833931   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 17:39:01.841369   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 17:39:01.842234   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 17:39:01.843115   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 17:39:01.843894   37047 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 17:39:01.847045   37047 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 17:39:01.994264   37047 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 17:39:02.235119   37047 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 17:39:02.235129   37047 kubeadm.go:309] 
	I0513 17:39:02.235165   37047 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 17:39:02.235171   37047 kubeadm.go:309] 
	I0513 17:39:02.235208   37047 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 17:39:02.235255   37047 kubeadm.go:309] 
	I0513 17:39:02.235278   37047 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 17:39:02.235306   37047 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 17:39:02.235334   37047 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 17:39:02.235337   37047 kubeadm.go:309] 
	I0513 17:39:02.235365   37047 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 17:39:02.235368   37047 kubeadm.go:309] 
	I0513 17:39:02.235394   37047 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 17:39:02.235398   37047 kubeadm.go:309] 
	I0513 17:39:02.235424   37047 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 17:39:02.235459   37047 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 17:39:02.235497   37047 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 17:39:02.235505   37047 kubeadm.go:309] 
	I0513 17:39:02.235549   37047 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 17:39:02.235591   37047 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 17:39:02.235598   37047 kubeadm.go:309] 
	I0513 17:39:02.235646   37047 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rm9hda.wbqm6wosqfjby2vj \
	I0513 17:39:02.235822   37047 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 \
	I0513 17:39:02.235835   37047 kubeadm.go:309] 	--control-plane 
	I0513 17:39:02.235837   37047 kubeadm.go:309] 
	I0513 17:39:02.235876   37047 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 17:39:02.235882   37047 kubeadm.go:309] 
	I0513 17:39:02.235925   37047 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rm9hda.wbqm6wosqfjby2vj \
	I0513 17:39:02.235992   37047 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 
	I0513 17:39:02.236065   37047 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
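
The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA file (path assumed from the certs directory used earlier; kubeadm does this internally when it prints the join command):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash returns the discovery-token-ca-cert-hash for a CA cert:
    // sha256 over the DER-encoded Subject Public Key Info.
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        fmt.Println(h)
    }
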
	I0513 17:39:02.236074   37047 cni.go:84] Creating CNI manager for ""
	I0513 17:39:02.236081   37047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:39:02.239657   37047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0513 17:39:02.246657   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0513 17:39:02.249478   37047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
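
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are a bridge CNI configuration, matching the "Configuring bridge CNI" line above. A sketch that writes a minimal conflist of that shape (field values are illustrative; the exact file minikube ships may differ):

    package main

    import "os"

    // A minimal CNI bridge conflist of the kind copied to
    // /etc/cni/net.d/1-k8s.conflist above. Values are illustrative.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
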
	I0513 17:39:02.254214   37047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 17:39:02.254248   37047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 17:39:02.254270   37047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-201000 minikube.k8s.io/updated_at=2024_05_13T17_39_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=stopped-upgrade-201000 minikube.k8s.io/primary=true
	I0513 17:39:02.296264   37047 ops.go:34] apiserver oom_adj: -16
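
The oom_adj probe a few lines up reads the legacy /proc/<pid>/oom_adj value for the apiserver; -16, near the -17 floor, tells the kernel's OOM killer to spare the process. A sketch of the same read, assuming a single kube-apiserver process so pgrep returns one PID:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj reads the legacy /proc/<pid>/oom_adj value for the
    // kube-apiserver, as the probe in the log does.
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(adj)), nil
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", adj)
    }
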
	I0513 17:39:02.296264   37047 kubeadm.go:1107] duration metric: took 42.045584ms to wait for elevateKubeSystemPrivileges
	W0513 17:39:02.296384   37047 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 17:39:02.296391   37047 kubeadm.go:393] duration metric: took 4m11.800065375s to StartCluster
	I0513 17:39:02.296401   37047 settings.go:142] acquiring lock: {Name:mk9ef358ebdddf34ee47447e0095ef8dc921e138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:39:02.296496   37047 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:39:02.296938   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:39:02.297142   37047 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:39:02.301493   37047 out.go:177] * Verifying Kubernetes components...
	I0513 17:39:02.297151   37047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 17:39:02.297226   37047 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:39:02.309694   37047 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-201000"
	I0513 17:39:02.309700   37047 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-201000"
	I0513 17:39:02.309709   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:39:02.309717   37047 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-201000"
	W0513 17:39:02.309725   37047 addons.go:243] addon storage-provisioner should already be in state true
	I0513 17:39:02.309744   37047 host.go:66] Checking if "stopped-upgrade-201000" exists ...
	I0513 17:39:02.309717   37047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-201000"
	I0513 17:39:02.310188   37047 retry.go:31] will retry after 733.363226ms: connect: dial unix /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/monitor: connect: connection refused
	I0513 17:39:02.314656   37047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:39:02.318729   37047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:39:02.318735   37047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 17:39:02.318742   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
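
The sshutil.go "new ssh client" line dials the host-forwarded guest port (56273 here) with the profile's private key. A sketch using golang.org/x/crypto/ssh under the usual local-VM assumptions (key-only auth, host key checking disabled); this is not minikube's sshutil implementation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialGuest opens an SSH connection to the QEMU guest through the
    // port QEMU forwards on localhost.
    func dialGuest(port int, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local VM
        }
        return ssh.Dial("tcp", fmt.Sprintf("localhost:%d", port), cfg)
    }

    func main() {
        client, err := dialGuest(56273,
            "/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }
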
	I0513 17:39:02.400611   37047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:39:02.406953   37047 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:39:02.407021   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:39:02.410923   37047 api_server.go:72] duration metric: took 113.767875ms to wait for apiserver process to appear ...
	I0513 17:39:02.410930   37047 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:39:02.410938   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:02.467932   37047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:39:03.046602   37047 kapi.go:59] client config for stopped-upgrade-201000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ca1e10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
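
The rest.Config dump above is the client-go configuration minikube builds for the profile: the guest's apiserver endpoint plus the profile's client cert/key and the cluster CA. A sketch constructing the equivalent client (paths copied from the log line; this is not minikube's kapi code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient builds a clientset roughly equivalent to the rest.Config
    // dump above: TLS client certs for the profile plus the cluster CA.
    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        if _, err := newClient(); err != nil {
            fmt.Println("client build failed:", err)
        }
    }
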
	I0513 17:39:03.046763   37047 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-201000"
	W0513 17:39:03.046769   37047 addons.go:243] addon default-storageclass should already be in state true
	I0513 17:39:03.046781   37047 host.go:66] Checking if "stopped-upgrade-201000" exists ...
	I0513 17:39:03.047449   37047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 17:39:03.047454   37047 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 17:39:03.047460   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:39:03.086288   37047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 17:39:05.769810   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:05.770051   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:05.792829   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:05.792946   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:05.808706   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:05.808780   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:05.820335   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:05.820410   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:05.830720   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:05.830786   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:05.841043   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:05.841115   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:05.851187   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:05.851254   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:05.861372   36897 logs.go:276] 0 containers: []
	W0513 17:39:05.861383   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:05.861447   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:05.879832   36897 logs.go:276] 1 containers: [4c0c749a5e72]
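
Each diagnostic pass in this log starts the same way: one `docker ps -a` per control-plane component, filtering on the kubelet's k8s_<name> container naming and printing only the IDs. The same discovery, sketched as a standalone Go program meant to run inside the guest where the docker CLI is on PATH:

    // Sketch of the per-component container discovery seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same filter/format query as the log lines above
    // and returns one ID per matching container.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            fmt.Println(c, ids, err)
        }
    }
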
	I0513 17:39:05.879851   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:05.879856   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:05.895607   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:05.895620   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:05.930004   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:05.930018   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:05.942697   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:05.942709   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:05.954630   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:05.954640   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:05.972498   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:05.972512   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:05.989487   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:05.989499   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:06.002294   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:06.002304   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:06.028583   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:06.028598   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:06.042979   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:06.042992   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:06.054763   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:06.054778   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:06.070089   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:06.070102   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:06.085416   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:06.085426   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:06.119066   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:06.119077   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:06.123510   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:06.123516   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
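
The api_server.go lines interleaved through this section come from two test processes (PIDs 36897 and 37047) each polling the same healthz endpoint on a short timeout, and every poll here ends in a client-side timeout rather than an HTTP error. The probe reduces to roughly the following; the 5s timeout is inferred from the poll spacing, and TLS verification is skipped only to keep the sketch short:

    // Sketch of the healthz probe behind the api_server.go:253/269 lines.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed from the ~5s poll spacing
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            log.Printf("stopped: %v", err) // Client.Timeout exceeded ...
            return
        }
        defer resp.Body.Close()
        log.Printf("healthz: %s", resp.Status)
    }
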
	I0513 17:39:07.413085   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:07.413173   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:08.636549   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:12.413847   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:12.413889   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:13.638726   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:13.638845   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:13.650451   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:13.650520   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:13.661006   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:13.661080   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:13.671620   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:13.671691   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:13.689120   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:13.689183   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:13.699462   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:13.699528   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:13.710051   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:13.710112   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:13.720345   36897 logs.go:276] 0 containers: []
	W0513 17:39:13.720360   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:13.720414   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:13.731024   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:13.731042   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:13.731048   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:13.742710   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:13.742720   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:13.754536   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:13.754547   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:13.766216   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:13.766227   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:13.801758   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:13.801771   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:13.806982   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:13.806990   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:13.824430   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:13.824443   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:13.859004   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:13.859015   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:13.870944   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:13.870959   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:13.883369   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:13.883383   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:13.897648   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:13.897661   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:13.922303   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:13.922318   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:13.949934   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:13.949943   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:13.961642   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:13.961656   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:13.973457   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:13.973469   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:17.414332   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:17.414371   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:16.493113   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:22.415016   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:22.415066   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:21.495254   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:21.495482   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:21.518913   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:21.519019   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:21.534667   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:21.534741   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:21.547087   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:21.547151   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:21.560371   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:21.560438   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:21.571469   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:21.571532   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:21.582040   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:21.582102   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:21.592266   36897 logs.go:276] 0 containers: []
	W0513 17:39:21.592276   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:21.592331   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:21.603263   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:21.603278   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:21.603286   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:21.620799   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:21.620811   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:21.632729   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:21.632742   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:21.637296   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:21.637304   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:21.651713   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:21.651722   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:21.663400   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:21.663412   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:21.674912   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:21.674923   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:21.689237   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:21.689251   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:21.700573   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:21.700586   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:21.711827   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:21.711840   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:21.723654   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:21.723668   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:21.735502   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:21.735517   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:21.759882   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:21.759891   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:21.794275   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:21.794283   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:21.830686   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:21.830697   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:24.347870   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:27.415810   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:27.415853   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:29.349993   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:29.350107   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:29.362695   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:29.362772   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:29.374596   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:29.374667   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:29.385807   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:29.385882   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:29.396108   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:29.396174   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:29.406166   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:29.406228   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:29.416795   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:29.416867   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:29.426997   36897 logs.go:276] 0 containers: []
	W0513 17:39:29.427007   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:29.427058   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:29.437401   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:29.437420   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:29.437426   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:29.452160   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:29.452170   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:29.463984   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:29.463995   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:29.477528   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:29.477539   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:29.494810   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:29.494820   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:29.506500   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:29.506510   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:29.518581   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:29.518591   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:29.523450   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:29.523456   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:29.557819   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:29.557833   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:29.577278   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:29.577288   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:29.592145   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:29.592159   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:29.604706   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:29.604716   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:29.638488   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:29.638497   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:29.650379   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:29.650392   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:29.662446   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:29.662459   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:32.416881   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:32.416924   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0513 17:39:33.191932   37047 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0513 17:39:33.196903   37047 out.go:177] * Enabled addons: storage-provisioner
	I0513 17:39:33.207761   37047 addons.go:505] duration metric: took 30.911227292s for enable addons: enabled=[storage-provisioner]
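
The default-storageclass warning above is the first concrete symptom for process 37047: the addon callback lists StorageClasses through the API server and the TCP dial itself times out. With a client built like the rest.Config sketch earlier, the failing call is approximately:

    // Sketch of the StorageClass listing that produced the i/o timeout above;
    // TLS setup is omitted for brevity, since the dial fails first anyway.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        clientset, err := kubernetes.NewForConfig(
            &rest.Config{Host: "https://10.0.2.15:8443"})
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            fmt.Println(err) // here: dial tcp 10.0.2.15:8443: i/o timeout
            return
        }
        fmt.Println(len(scs.Items), "storage classes")
    }
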
	I0513 17:39:32.187614   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:37.418191   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:37.418209   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:37.189692   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:37.189795   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:37.201909   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:37.201980   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:37.212830   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:37.212895   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:37.223842   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:37.223923   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:37.234491   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:37.234558   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:37.245283   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:37.245346   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:37.255607   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:37.255675   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:37.266363   36897 logs.go:276] 0 containers: []
	W0513 17:39:37.266379   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:37.266429   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:37.277085   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:37.277101   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:37.277106   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:37.288600   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:37.288612   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:37.323633   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:37.323645   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:37.357684   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:37.357697   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:37.369649   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:37.369661   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:37.381378   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:37.381390   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:37.396186   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:37.396198   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:37.410745   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:37.410758   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:37.422468   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:37.422482   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:37.434025   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:37.434037   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:37.451276   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:37.451288   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:37.462915   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:37.462925   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:37.467605   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:37.467611   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:37.481908   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:37.481919   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:37.505769   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:37.505780   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:40.019890   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:42.419782   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:42.419825   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:45.022276   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:45.022555   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:45.050384   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:45.050501   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:45.072791   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:45.072865   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:45.088641   36897 logs.go:276] 4 containers: [c4d76732fd6b c87aaf9c9388 186d0aa98e14 8d8f0aa7f156]
	I0513 17:39:45.088710   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:45.100187   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:45.100247   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:45.110456   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:45.110520   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:45.121211   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:45.121271   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:45.131705   36897 logs.go:276] 0 containers: []
	W0513 17:39:45.131717   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:45.131768   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:45.142174   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:45.142188   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:45.142194   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:45.178384   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:45.178398   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:45.192601   36897 logs.go:123] Gathering logs for coredns [8d8f0aa7f156] ...
	I0513 17:39:45.192612   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d8f0aa7f156"
	I0513 17:39:45.204038   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:45.204050   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:45.222681   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:45.222691   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:45.235649   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:45.235663   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:45.250139   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:45.250152   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:45.262761   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:45.262774   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:45.274559   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:45.274569   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:45.289577   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:45.289591   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:45.316155   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:45.316166   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:45.328451   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:45.328462   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:45.363220   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:45.363231   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:45.367695   36897 logs.go:123] Gathering logs for coredns [186d0aa98e14] ...
	I0513 17:39:45.367702   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 186d0aa98e14"
	I0513 17:39:45.380075   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:45.380089   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:47.421895   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:47.421938   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:47.893899   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:52.424084   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:52.424115   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:52.896094   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:52.896354   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:39:52.917542   36897 logs.go:276] 1 containers: [aa1c5dc812f1]
	I0513 17:39:52.917667   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:39:52.933936   36897 logs.go:276] 1 containers: [a07ddce91dc1]
	I0513 17:39:52.934009   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:39:52.946164   36897 logs.go:276] 4 containers: [36bcfdf0b842 0f4c32511b6a c4d76732fd6b c87aaf9c9388]
	I0513 17:39:52.946234   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:39:52.958464   36897 logs.go:276] 1 containers: [414286afb194]
	I0513 17:39:52.958537   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:39:52.975031   36897 logs.go:276] 1 containers: [a648b4a5029b]
	I0513 17:39:52.975087   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:39:52.985463   36897 logs.go:276] 1 containers: [89cd42ea4006]
	I0513 17:39:52.985536   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:39:52.995316   36897 logs.go:276] 0 containers: []
	W0513 17:39:52.995327   36897 logs.go:278] No container was found matching "kindnet"
	I0513 17:39:52.995384   36897 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:39:53.005828   36897 logs.go:276] 1 containers: [4c0c749a5e72]
	I0513 17:39:53.005843   36897 logs.go:123] Gathering logs for kube-apiserver [aa1c5dc812f1] ...
	I0513 17:39:53.005849   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa1c5dc812f1"
	I0513 17:39:53.019820   36897 logs.go:123] Gathering logs for Docker ...
	I0513 17:39:53.019833   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:39:53.042392   36897 logs.go:123] Gathering logs for container status ...
	I0513 17:39:53.042400   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:39:53.053845   36897 logs.go:123] Gathering logs for coredns [c4d76732fd6b] ...
	I0513 17:39:53.053858   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d76732fd6b"
	I0513 17:39:53.073106   36897 logs.go:123] Gathering logs for coredns [c87aaf9c9388] ...
	I0513 17:39:53.073120   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87aaf9c9388"
	I0513 17:39:53.085055   36897 logs.go:123] Gathering logs for kube-controller-manager [89cd42ea4006] ...
	I0513 17:39:53.085066   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89cd42ea4006"
	I0513 17:39:53.102430   36897 logs.go:123] Gathering logs for storage-provisioner [4c0c749a5e72] ...
	I0513 17:39:53.102442   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c0c749a5e72"
	I0513 17:39:53.114488   36897 logs.go:123] Gathering logs for etcd [a07ddce91dc1] ...
	I0513 17:39:53.114498   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a07ddce91dc1"
	I0513 17:39:53.130359   36897 logs.go:123] Gathering logs for coredns [0f4c32511b6a] ...
	I0513 17:39:53.130371   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f4c32511b6a"
	I0513 17:39:53.148162   36897 logs.go:123] Gathering logs for kube-scheduler [414286afb194] ...
	I0513 17:39:53.148173   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 414286afb194"
	I0513 17:39:53.163111   36897 logs.go:123] Gathering logs for coredns [36bcfdf0b842] ...
	I0513 17:39:53.163123   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36bcfdf0b842"
	I0513 17:39:53.175324   36897 logs.go:123] Gathering logs for kube-proxy [a648b4a5029b] ...
	I0513 17:39:53.175338   36897 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a648b4a5029b"
	I0513 17:39:53.187743   36897 logs.go:123] Gathering logs for kubelet ...
	I0513 17:39:53.187752   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:39:53.222592   36897 logs.go:123] Gathering logs for dmesg ...
	I0513 17:39:53.222601   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:39:53.227297   36897 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:39:53.227303   36897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:39:55.764986   36897 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:00.767102   36897 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:00.771515   36897 out.go:177] 
	W0513 17:40:00.775520   36897 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0513 17:40:00.775527   36897 out.go:239] * 
	W0513 17:40:00.776071   36897 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:40:00.790370   36897 out.go:177] 
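
This is where process 36897 gives up: the GUEST_START failure is the outer six-minute node wait expiring around the healthz probe, not a crash of any single component (the container status section further down still shows the control-plane containers Running). The wait amounts to roughly this loop; healthy stands in for the probe sketched earlier:

    // Sketch of the 6m node wait whose expiry is reported above.
    package main

    import (
        "errors"
        "log"
        "time"
    )

    func waitForAPIServer(healthy func() bool, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if healthy() {
                return nil
            }
            time.Sleep(5 * time.Second)
        }
        return errors.New("apiserver healthz never reported healthy")
    }

    func main() {
        // Short timeout for the demo; the test run above used 6m0s.
        err := waitForAPIServer(func() bool { return false }, 10*time.Second)
        log.Println(err)
    }
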
	I0513 17:39:57.426222   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:57.426267   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:02.428402   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:02.428546   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:02.443107   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:02.443190   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:02.455337   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:02.455411   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:02.465603   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:02.465683   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:02.475808   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:02.475872   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:02.486149   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:02.486219   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:02.496350   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:02.496422   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:02.506746   37047 logs.go:276] 0 containers: []
	W0513 17:40:02.506759   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:02.506832   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:02.517760   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:02.517776   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:02.517788   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:02.535096   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:02.535106   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:02.548063   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:02.548076   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:02.583576   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:02.583588   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:02.597915   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:02.597924   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:02.612349   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:02.612360   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:02.624382   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:02.624395   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:02.636081   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:02.636092   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:02.647883   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:02.647894   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:02.659820   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:02.659832   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:02.683399   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:02.683406   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:02.720280   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:02.720295   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:02.724646   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:02.724653   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:05.240708   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:10.242948   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:10.243122   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:10.254444   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:10.254511   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:10.265586   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:10.265658   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:10.276156   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:10.276222   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:10.287008   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:10.287066   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:10.300109   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:10.300168   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:10.310437   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:10.310491   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:10.320590   37047 logs.go:276] 0 containers: []
	W0513 17:40:10.320599   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:10.320643   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:10.332459   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:10.332473   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:10.332480   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:10.349681   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:10.349690   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:10.361329   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:10.361340   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:10.365552   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:10.365560   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:10.400320   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:10.400330   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:10.414399   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:10.414408   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:10.431783   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:10.431792   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:10.443427   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:10.443437   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:10.454784   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:10.454795   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:10.491835   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:10.491844   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:10.506160   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:10.506168   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:10.518146   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:10.518160   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:10.529456   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:10.529466   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:13.056015   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-05-14 00:30:47 UTC, ends at Tue 2024-05-14 00:40:16 UTC. --
	May 14 00:39:56 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:39:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 14 00:40:01 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:01Z" level=error msg="ContainerStats resp: {0x40008de2c0 linux}"
	May 14 00:40:01 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:01Z" level=error msg="ContainerStats resp: {0x40008df3c0 linux}"
	May 14 00:40:01 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 14 00:40:02 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:02Z" level=error msg="ContainerStats resp: {0x40004835c0 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x40005e2640 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x40005e31c0 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x40005e3740 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x40007caf40 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x4000934200 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x40007cb3c0 linux}"
	May 14 00:40:03 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:03Z" level=error msg="ContainerStats resp: {0x40007cb7c0 linux}"
	May 14 00:40:06 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 14 00:40:11 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 14 00:40:13 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:13Z" level=error msg="ContainerStats resp: {0x40007aad40 linux}"
	May 14 00:40:13 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:13Z" level=error msg="ContainerStats resp: {0x4000483900 linux}"
	May 14 00:40:14 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:14Z" level=error msg="ContainerStats resp: {0x40008f2040 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40007ab480 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40008f2e40 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40008f3240 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40005e2300 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40005e2c40 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40007ca400 linux}"
	May 14 00:40:15 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:15Z" level=error msg="ContainerStats resp: {0x40005e3bc0 linux}"
	May 14 00:40:16 running-upgrade-056000 cri-dockerd[3094]: time="2024-05-14T00:40:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	36bcfdf0b8421       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   8a2bda680647b
	0f4c32511b6ab       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   ca0181bf2e5cd
	c4d76732fd6bd       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8a2bda680647b
	c87aaf9c93885       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ca0181bf2e5cd
	a648b4a5029b4       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   9ba6b01414424
	4c0c749a5e722       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   ff61a1ba8a3da
	89cd42ea40066       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   55978504f4b9c
	aa1c5dc812f13       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   8885450ad437f
	a07ddce91dc1c       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   6e90bbbebcac5
	414286afb1943       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   d56d6b6ae3535
	
	
	==> coredns [0f4c32511b6a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:47797->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:42285->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:45604->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:45096->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:52420->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:44829->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4192455115535340943.316835467155302540. HINFO: read udp 10.244.0.3:52579->10.0.2.3:53: i/o timeout
	
	
	==> coredns [36bcfdf0b842] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:56249->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:42954->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:38164->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:55552->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:46422->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:41872->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4167088165638987165.5939949086431761303. HINFO: read udp 10.244.0.2:60333->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c4d76732fd6b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:57374->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:57853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:57371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:45104->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:55010->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:52499->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:52772->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:42863->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:35811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7007268814036648733.7295093251743731285. HINFO: read udp 10.244.0.2:38026->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c87aaf9c9388] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:52447->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:38907->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:41416->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:55503->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:57548->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:51805->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:38737->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:48421->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:34916->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1973559088359893679.6576384852786882748. HINFO: read udp 10.244.0.3:37471->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
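	Every CoreDNS instance above fails the same way: HINFO probes to 10.0.2.3:53 (the DNS forwarder supplied by QEMU's user-mode networking) time out, so in-cluster DNS never reaches an upstream resolver. A minimal Go sketch that reproduces the probe — hypothetical, not part of the test suite; it would need to run inside the guest, where 10.0.2.3 is the slirp DNS address, and "k8s.io" is just an arbitrary query name:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Force all lookups through the upstream resolver CoreDNS is timing out against.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "k8s.io")
		if err != nil {
			// An i/o timeout here mirrors the CoreDNS HINFO errors above.
			fmt.Println("upstream DNS unreachable:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}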
	
	
	==> describe nodes <==
	Name:               running-upgrade-056000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-056000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761
	                    minikube.k8s.io/name=running-upgrade-056000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_13T17_35_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 May 2024 00:35:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-056000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 May 2024 00:40:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 May 2024 00:35:59 +0000   Tue, 14 May 2024 00:35:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 May 2024 00:35:59 +0000   Tue, 14 May 2024 00:35:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 May 2024 00:35:59 +0000   Tue, 14 May 2024 00:35:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 May 2024 00:35:59 +0000   Tue, 14 May 2024 00:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-056000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58dbefe33924dab8d1a8e2450f87ef0
	  System UUID:                f58dbefe33924dab8d1a8e2450f87ef0
	  Boot ID:                    25b6c5b3-dadb-40c6-8199-2e4a5f3024a3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9wn4h                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-nzjfz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-056000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-056000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-056000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-mtwzd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-056000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-056000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-056000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-056000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-056000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m6s   node-controller  Node running-upgrade-056000 event: Registered Node running-upgrade-056000 in Controller
	
	
	==> dmesg <==
	[  +2.726983] kauditd_printk_skb: 14 callbacks suppressed
	[May14 00:31] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.069112] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.064303] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.203405] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.060133] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.893510] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +9.650012] systemd-fstab-generator[1959]: Ignoring "noauto" for root device
	[  +2.485861] systemd-fstab-generator[2238]: Ignoring "noauto" for root device
	[  +0.145938] systemd-fstab-generator[2274]: Ignoring "noauto" for root device
	[  +0.076418] systemd-fstab-generator[2285]: Ignoring "noauto" for root device
	[  +0.092454] systemd-fstab-generator[2304]: Ignoring "noauto" for root device
	[ +12.769802] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.205498] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
	[  +0.069797] systemd-fstab-generator[3062]: Ignoring "noauto" for root device
	[  +0.069239] systemd-fstab-generator[3073]: Ignoring "noauto" for root device
	[  +0.079979] systemd-fstab-generator[3087]: Ignoring "noauto" for root device
	[  +3.006908] systemd-fstab-generator[3237]: Ignoring "noauto" for root device
	[  +3.350241] systemd-fstab-generator[3801]: Ignoring "noauto" for root device
	[ +11.496746] systemd-fstab-generator[4177]: Ignoring "noauto" for root device
	[May14 00:32] kauditd_printk_skb: 68 callbacks suppressed
	[May14 00:35] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.573234] systemd-fstab-generator[12342]: Ignoring "noauto" for root device
	[  +5.645184] systemd-fstab-generator[12935]: Ignoring "noauto" for root device
	[  +0.457611] systemd-fstab-generator[13068]: Ignoring "noauto" for root device
	
	
	==> etcd [a07ddce91dc1] <==
	{"level":"info","ts":"2024-05-14T00:35:55.079Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-14T00:35:55.080Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-14T00:35:55.080Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-14T00:35:55.080Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-14T00:35:55.080Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-05-14T00:35:55.080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-14T00:35:55.082Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-14T00:35:55.360Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:35:55.364Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:35:55.364Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:35:55.364Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-14T00:35:55.364Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-056000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-14T00:35:55.364Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-14T00:35:55.365Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-05-14T00:35:55.365Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-14T00:35:55.365Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-14T00:35:55.368Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-14T00:35:55.368Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:40:17 up 9 min,  0 users,  load average: 0.24, 0.28, 0.18
	Linux running-upgrade-056000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [aa1c5dc812f1] <==
	I0514 00:35:57.082362       1 controller.go:611] quota admission added evaluator for: namespaces
	I0514 00:35:57.118750       1 cache.go:39] Caches are synced for autoregister controller
	I0514 00:35:57.118823       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0514 00:35:57.119230       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0514 00:35:57.119422       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0514 00:35:57.121594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0514 00:35:57.121809       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0514 00:35:57.850715       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0514 00:35:58.038697       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0514 00:35:58.049213       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0514 00:35:58.049245       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0514 00:35:58.193537       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0514 00:35:58.205666       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0514 00:35:58.286275       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0514 00:35:58.288123       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0514 00:35:58.288503       1 controller.go:611] quota admission added evaluator for: endpoints
	I0514 00:35:58.289702       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0514 00:35:59.164203       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0514 00:35:59.737798       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0514 00:35:59.740977       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0514 00:35:59.782707       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0514 00:35:59.790991       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0514 00:36:12.715815       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0514 00:36:12.917800       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0514 00:36:13.210233       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [89cd42ea4006] <==
	I0514 00:36:11.988415       1 shared_informer.go:262] Caches are synced for crt configmap
	I0514 00:36:11.993677       1 shared_informer.go:262] Caches are synced for namespace
	I0514 00:36:12.012971       1 shared_informer.go:262] Caches are synced for PVC protection
	I0514 00:36:12.012989       1 shared_informer.go:262] Caches are synced for daemon sets
	I0514 00:36:12.013098       1 shared_informer.go:262] Caches are synced for cronjob
	I0514 00:36:12.015283       1 shared_informer.go:262] Caches are synced for service account
	I0514 00:36:12.015288       1 shared_informer.go:262] Caches are synced for expand
	I0514 00:36:12.016323       1 shared_informer.go:262] Caches are synced for ephemeral
	I0514 00:36:12.017407       1 shared_informer.go:262] Caches are synced for TTL
	I0514 00:36:12.087066       1 shared_informer.go:262] Caches are synced for persistent volume
	I0514 00:36:12.180605       1 shared_informer.go:262] Caches are synced for disruption
	I0514 00:36:12.180689       1 disruption.go:371] Sending events to api server.
	I0514 00:36:12.190485       1 shared_informer.go:262] Caches are synced for endpoint
	I0514 00:36:12.217914       1 shared_informer.go:262] Caches are synced for resource quota
	I0514 00:36:12.218011       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0514 00:36:12.222178       1 shared_informer.go:262] Caches are synced for resource quota
	I0514 00:36:12.261456       1 shared_informer.go:262] Caches are synced for deployment
	I0514 00:36:12.265404       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0514 00:36:12.633939       1 shared_informer.go:262] Caches are synced for garbage collector
	I0514 00:36:12.663275       1 shared_informer.go:262] Caches are synced for garbage collector
	I0514 00:36:12.663335       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0514 00:36:12.718281       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mtwzd"
	I0514 00:36:12.922546       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0514 00:36:13.017091       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nzjfz"
	I0514 00:36:13.019365       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9wn4h"
	
	
	==> kube-proxy [a648b4a5029b] <==
	I0514 00:36:13.200106       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0514 00:36:13.200131       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0514 00:36:13.200139       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0514 00:36:13.208485       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0514 00:36:13.208496       1 server_others.go:206] "Using iptables Proxier"
	I0514 00:36:13.208520       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0514 00:36:13.208640       1 server.go:661] "Version info" version="v1.24.1"
	I0514 00:36:13.208669       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0514 00:36:13.208919       1 config.go:317] "Starting service config controller"
	I0514 00:36:13.208928       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0514 00:36:13.208936       1 config.go:226] "Starting endpoint slice config controller"
	I0514 00:36:13.208965       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0514 00:36:13.209249       1 config.go:444] "Starting node config controller"
	I0514 00:36:13.209270       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0514 00:36:13.309029       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0514 00:36:13.309046       1 shared_informer.go:262] Caches are synced for service config
	I0514 00:36:13.309450       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [414286afb194] <==
	W0514 00:35:57.079315       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0514 00:35:57.079339       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0514 00:35:57.079792       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0514 00:35:57.081096       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0514 00:35:57.079886       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0514 00:35:57.081618       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0514 00:35:57.079927       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0514 00:35:57.079941       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0514 00:35:57.081910       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0514 00:35:57.079954       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0514 00:35:57.081924       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0514 00:35:57.080013       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0514 00:35:57.081940       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0514 00:35:57.080109       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0514 00:35:57.081952       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0514 00:35:57.081662       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0514 00:35:57.924847       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0514 00:35:57.924899       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0514 00:35:58.048548       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0514 00:35:58.048772       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0514 00:35:58.060648       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0514 00:35:58.060675       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0514 00:35:58.089043       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0514 00:35:58.089066       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0514 00:35:58.676138       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-05-14 00:30:47 UTC, ends at Tue 2024-05-14 00:40:17 UTC. --
	May 14 00:36:01 running-upgrade-056000 kubelet[12941]: E0514 00:36:01.568093   12941 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-056000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-056000"
	May 14 00:36:01 running-upgrade-056000 kubelet[12941]: E0514 00:36:01.767871   12941 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-056000\" already exists" pod="kube-system/etcd-running-upgrade-056000"
	May 14 00:36:01 running-upgrade-056000 kubelet[12941]: I0514 00:36:01.967523   12941 request.go:601] Waited for 1.118651553s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 14 00:36:01 running-upgrade-056000 kubelet[12941]: E0514 00:36:01.971175   12941 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-056000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-056000"
	May 14 00:36:11 running-upgrade-056000 kubelet[12941]: I0514 00:36:11.970185   12941 topology_manager.go:200] "Topology Admit Handler"
	May 14 00:36:11 running-upgrade-056000 kubelet[12941]: I0514 00:36:11.977121   12941 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 14 00:36:11 running-upgrade-056000 kubelet[12941]: I0514 00:36:11.977628   12941 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.077491   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b562624c-b0af-4481-869d-c883ceeeb323-tmp\") pod \"storage-provisioner\" (UID: \"b562624c-b0af-4481-869d-c883ceeeb323\") " pod="kube-system/storage-provisioner"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.077517   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd929\" (UniqueName: \"kubernetes.io/projected/b562624c-b0af-4481-869d-c883ceeeb323-kube-api-access-dd929\") pod \"storage-provisioner\" (UID: \"b562624c-b0af-4481-869d-c883ceeeb323\") " pod="kube-system/storage-provisioner"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: E0514 00:36:12.182513   12941 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: E0514 00:36:12.182535   12941 projected.go:192] Error preparing data for projected volume kube-api-access-dd929 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: E0514 00:36:12.182573   12941 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b562624c-b0af-4481-869d-c883ceeeb323-kube-api-access-dd929 podName:b562624c-b0af-4481-869d-c883ceeeb323 nodeName:}" failed. No retries permitted until 2024-05-14 00:36:12.682560447 +0000 UTC m=+12.954211434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dd929" (UniqueName: "kubernetes.io/projected/b562624c-b0af-4481-869d-c883ceeeb323-kube-api-access-dd929") pod "storage-provisioner" (UID: "b562624c-b0af-4481-869d-c883ceeeb323") : configmap "kube-root-ca.crt" not found
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.720365   12941 topology_manager.go:200] "Topology Admit Handler"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.891290   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e765c64-0b6f-43a6-916d-f111caf7aed0-lib-modules\") pod \"kube-proxy-mtwzd\" (UID: \"3e765c64-0b6f-43a6-916d-f111caf7aed0\") " pod="kube-system/kube-proxy-mtwzd"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.891309   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e765c64-0b6f-43a6-916d-f111caf7aed0-xtables-lock\") pod \"kube-proxy-mtwzd\" (UID: \"3e765c64-0b6f-43a6-916d-f111caf7aed0\") " pod="kube-system/kube-proxy-mtwzd"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.891320   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e765c64-0b6f-43a6-916d-f111caf7aed0-kube-proxy\") pod \"kube-proxy-mtwzd\" (UID: \"3e765c64-0b6f-43a6-916d-f111caf7aed0\") " pod="kube-system/kube-proxy-mtwzd"
	May 14 00:36:12 running-upgrade-056000 kubelet[12941]: I0514 00:36:12.891332   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86g6n\" (UniqueName: \"kubernetes.io/projected/3e765c64-0b6f-43a6-916d-f111caf7aed0-kube-api-access-86g6n\") pod \"kube-proxy-mtwzd\" (UID: \"3e765c64-0b6f-43a6-916d-f111caf7aed0\") " pod="kube-system/kube-proxy-mtwzd"
	May 14 00:36:13 running-upgrade-056000 kubelet[12941]: I0514 00:36:13.022674   12941 topology_manager.go:200] "Topology Admit Handler"
	May 14 00:36:13 running-upgrade-056000 kubelet[12941]: I0514 00:36:13.024446   12941 topology_manager.go:200] "Topology Admit Handler"
	May 14 00:36:13 running-upgrade-056000 kubelet[12941]: I0514 00:36:13.192861   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wc88\" (UniqueName: \"kubernetes.io/projected/df0a97d7-d0a4-45a5-9a21-47725bbd5e8a-kube-api-access-9wc88\") pod \"coredns-6d4b75cb6d-nzjfz\" (UID: \"df0a97d7-d0a4-45a5-9a21-47725bbd5e8a\") " pod="kube-system/coredns-6d4b75cb6d-nzjfz"
	May 14 00:36:13 running-upgrade-056000 kubelet[12941]: I0514 00:36:13.192887   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0a97d7-d0a4-45a5-9a21-47725bbd5e8a-config-volume\") pod \"coredns-6d4b75cb6d-nzjfz\" (UID: \"df0a97d7-d0a4-45a5-9a21-47725bbd5e8a\") " pod="kube-system/coredns-6d4b75cb6d-nzjfz"
	May 14 00:36:13 running-upgrade-056000 kubelet[12941]: I0514 00:36:13.192900   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s4j\" (UniqueName: \"kubernetes.io/projected/944e69b1-e5f0-4cf4-b318-266d48a32e64-kube-api-access-z5s4j\") pod \"coredns-6d4b75cb6d-9wn4h\" (UID: \"944e69b1-e5f0-4cf4-b318-266d48a32e64\") " pod="kube-system/coredns-6d4b75cb6d-9wn4h"
	May 14 00:36:13 running-upgrade-056000 kubelet[12941]: I0514 00:36:13.192910   12941 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/944e69b1-e5f0-4cf4-b318-266d48a32e64-config-volume\") pod \"coredns-6d4b75cb6d-9wn4h\" (UID: \"944e69b1-e5f0-4cf4-b318-266d48a32e64\") " pod="kube-system/coredns-6d4b75cb6d-9wn4h"
	May 14 00:39:52 running-upgrade-056000 kubelet[12941]: I0514 00:39:52.116621   12941 scope.go:110] "RemoveContainer" containerID="8d8f0aa7f15607c586c3465fc3446d93ec579c1c14abd9c3074e9d9e74208be3"
	May 14 00:39:52 running-upgrade-056000 kubelet[12941]: I0514 00:39:52.149148   12941 scope.go:110] "RemoveContainer" containerID="186d0aa98e140c63d6d3da52986d0fde2a6ccf809a354bdb1bf5df88c0f77bc1"
	
	
	==> storage-provisioner [4c0c749a5e72] <==
	I0514 00:36:13.079904       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0514 00:36:13.090355       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0514 00:36:13.090391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0514 00:36:13.097734       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0514 00:36:13.097871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-056000_8c574f2f-5b8e-4d85-a574-d6f7465683a2!
	I0514 00:36:13.098095       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"488329a3-5b66-46cc-989c-50b53c48621e", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-056000_8c574f2f-5b8e-4d85-a574-d6f7465683a2 became leader
	I0514 00:36:13.198184       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-056000_8c574f2f-5b8e-4d85-a574-d6f7465683a2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-056000 -n running-upgrade-056000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-056000 -n running-upgrade-056000: exit status 2 (15.652412375s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-056000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-056000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-056000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-056000: (1.216681s)
--- FAIL: TestRunningBinaryUpgrade (615.68s)

                                                
                                    
TestKubernetesUpgrade (18.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-549000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-549000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.894286167s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-549000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-549000" primary control-plane node in "kubernetes-upgrade-549000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-549000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
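	Both VM creation attempts above die on the same error: the QEMU launch is piped through socket_vmnet_client, which cannot connect to "/var/run/socket_vmnet", so the socket_vmnet daemon is evidently not listening on the host. A small Go sketch that checks the socket directly — hypothetical, assuming only the socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the error above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" means the socket file exists but nothing is
			// accepting on it; "no such file" would mean the daemon never started.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}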
** stderr ** 
	I0513 17:33:18.868977   36972 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:33:18.869113   36972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:33:18.869115   36972 out.go:304] Setting ErrFile to fd 2...
	I0513 17:33:18.869118   36972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:33:18.869230   36972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:33:18.870283   36972 out.go:298] Setting JSON to false
	I0513 17:33:18.887001   36972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27168,"bootTime":1715619630,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:33:18.887067   36972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:33:18.891607   36972 out.go:177] * [kubernetes-upgrade-549000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:33:18.898551   36972 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:33:18.898630   36972 notify.go:220] Checking for updates...
	I0513 17:33:18.902501   36972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:33:18.905485   36972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:33:18.908523   36972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:33:18.911502   36972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:33:18.914406   36972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:33:18.917878   36972 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:33:18.917943   36972 config.go:182] Loaded profile config "running-upgrade-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:33:18.917994   36972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:33:18.922542   36972 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:33:18.929545   36972 start.go:297] selected driver: qemu2
	I0513 17:33:18.929556   36972 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:33:18.929565   36972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:33:18.931795   36972 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:33:18.934482   36972 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:33:18.935840   36972 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 17:33:18.935863   36972 cni.go:84] Creating CNI manager for ""
	I0513 17:33:18.935871   36972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 17:33:18.935907   36972 start.go:340] cluster config:
	{Name:kubernetes-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:33:18.940154   36972 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:33:18.947554   36972 out.go:177] * Starting "kubernetes-upgrade-549000" primary control-plane node in "kubernetes-upgrade-549000" cluster
	I0513 17:33:18.951512   36972 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:33:18.951528   36972 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:33:18.951538   36972 cache.go:56] Caching tarball of preloaded images
	I0513 17:33:18.951597   36972 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:33:18.951602   36972 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 17:33:18.951655   36972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kubernetes-upgrade-549000/config.json ...
	I0513 17:33:18.951665   36972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kubernetes-upgrade-549000/config.json: {Name:mkd67055035f15a88e4672a9bdd7a08bf495846c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:33:18.952011   36972 start.go:360] acquireMachinesLock for kubernetes-upgrade-549000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:33:18.952052   36972 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "kubernetes-upgrade-549000"
	I0513 17:33:18.952066   36972 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:33:18.952092   36972 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:33:18.955573   36972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:33:18.979985   36972 start.go:159] libmachine.API.Create for "kubernetes-upgrade-549000" (driver="qemu2")
	I0513 17:33:18.980012   36972 client.go:168] LocalClient.Create starting
	I0513 17:33:18.980082   36972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:33:18.980118   36972 main.go:141] libmachine: Decoding PEM data...
	I0513 17:33:18.980126   36972 main.go:141] libmachine: Parsing certificate...
	I0513 17:33:18.980161   36972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:33:18.980184   36972 main.go:141] libmachine: Decoding PEM data...
	I0513 17:33:18.980199   36972 main.go:141] libmachine: Parsing certificate...
	I0513 17:33:18.980550   36972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:33:19.125077   36972 main.go:141] libmachine: Creating SSH key...
	I0513 17:33:19.243557   36972 main.go:141] libmachine: Creating Disk image...
	I0513 17:33:19.243566   36972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:33:19.243769   36972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:19.257101   36972 main.go:141] libmachine: STDOUT: 
	I0513 17:33:19.257124   36972 main.go:141] libmachine: STDERR: 
	I0513 17:33:19.257184   36972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2 +20000M
	I0513 17:33:19.268504   36972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:33:19.268533   36972 main.go:141] libmachine: STDERR: 
	I0513 17:33:19.268552   36972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:19.268557   36972 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:33:19.268590   36972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:03:9a:84:17:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:19.270334   36972 main.go:141] libmachine: STDOUT: 
	I0513 17:33:19.270349   36972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:33:19.270368   36972 client.go:171] duration metric: took 290.35675ms to LocalClient.Create
	I0513 17:33:21.272534   36972 start.go:128] duration metric: took 2.320454292s to createHost
	I0513 17:33:21.272617   36972 start.go:83] releasing machines lock for "kubernetes-upgrade-549000", held for 2.320600917s
	W0513 17:33:21.272711   36972 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:33:21.277601   36972 out.go:177] * Deleting "kubernetes-upgrade-549000" in qemu2 ...
	W0513 17:33:21.313572   36972 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:33:21.313610   36972 start.go:728] Will try again in 5 seconds ...
	I0513 17:33:26.315770   36972 start.go:360] acquireMachinesLock for kubernetes-upgrade-549000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:33:26.316402   36972 start.go:364] duration metric: took 472.917µs to acquireMachinesLock for "kubernetes-upgrade-549000"
	I0513 17:33:26.316488   36972 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:33:26.316739   36972 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:33:26.322412   36972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:33:26.370759   36972 start.go:159] libmachine.API.Create for "kubernetes-upgrade-549000" (driver="qemu2")
	I0513 17:33:26.370821   36972 client.go:168] LocalClient.Create starting
	I0513 17:33:26.370939   36972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:33:26.370997   36972 main.go:141] libmachine: Decoding PEM data...
	I0513 17:33:26.371016   36972 main.go:141] libmachine: Parsing certificate...
	I0513 17:33:26.371071   36972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:33:26.371116   36972 main.go:141] libmachine: Decoding PEM data...
	I0513 17:33:26.371131   36972 main.go:141] libmachine: Parsing certificate...
	I0513 17:33:26.371757   36972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:33:26.537427   36972 main.go:141] libmachine: Creating SSH key...
	I0513 17:33:26.660751   36972 main.go:141] libmachine: Creating Disk image...
	I0513 17:33:26.660757   36972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:33:26.660976   36972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:26.673860   36972 main.go:141] libmachine: STDOUT: 
	I0513 17:33:26.673883   36972 main.go:141] libmachine: STDERR: 
	I0513 17:33:26.673946   36972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2 +20000M
	I0513 17:33:26.684786   36972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:33:26.684802   36972 main.go:141] libmachine: STDERR: 
	I0513 17:33:26.684816   36972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:26.684821   36972 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:33:26.684861   36972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f6:7a:81:e5:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:26.686626   36972 main.go:141] libmachine: STDOUT: 
	I0513 17:33:26.686644   36972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:33:26.686656   36972 client.go:171] duration metric: took 315.837167ms to LocalClient.Create
	I0513 17:33:28.688900   36972 start.go:128] duration metric: took 2.372164917s to createHost
	I0513 17:33:28.689020   36972 start.go:83] releasing machines lock for "kubernetes-upgrade-549000", held for 2.372616209s
	W0513 17:33:28.689419   36972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:33:28.701158   36972 out.go:177] 
	W0513 17:33:28.707277   36972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:33:28.707316   36972 out.go:239] * 
	* 
	W0513 17:33:28.710085   36972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:33:28.722144   36972 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-549000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
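Every start attempt in this test fails at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet Unix socket, so QEMU never receives the file descriptor for its -netdev socket device. A minimal triage sketch for the CI host, assuming the Homebrew-managed socket_vmnet service that the profile's SocketVMnetPath/SocketVMnetClientPath values point at:

	ls -l /var/run/socket_vmnet               # the socket must exist and be accessible
	pgrep -fl socket_vmnet                    # check whether the daemon is running at all
	sudo brew services start socket_vmnet     # assumed fix: (re)start the daemon via Homebrew services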
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-549000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-549000: (3.415288875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-549000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-549000 status --format={{.Host}}: exit status 7 (57.926917ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
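minikube status reports state through its exit code, which is why the harness treats exit status 7 as tolerable here: a stopped host is the expected state between the stop and the upgrade start. A sketch of branching on that code (reading 7 as "host stopped" is inferred from this log, not a documented contract):

	out/minikube-darwin-arm64 -p kubernetes-upgrade-549000 status --format={{.Host}}
	case $? in
	  0) echo "host running" ;;
	  7) echo "host stopped (expected mid-upgrade)" ;;   # matches the Stopped/exit-7 pair above
	  *) echo "unexpected status code" ;;
	esac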
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-549000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-549000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181753708s)

-- stdout --
	* [kubernetes-upgrade-549000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-549000" primary control-plane node in "kubernetes-upgrade-549000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-549000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-549000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0513 17:33:32.240871   37008 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:33:32.241001   37008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:33:32.241005   37008 out.go:304] Setting ErrFile to fd 2...
	I0513 17:33:32.241007   37008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:33:32.241128   37008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:33:32.242114   37008 out.go:298] Setting JSON to false
	I0513 17:33:32.258569   37008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27182,"bootTime":1715619630,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:33:32.258635   37008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:33:32.263158   37008 out.go:177] * [kubernetes-upgrade-549000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:33:32.270121   37008 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:33:32.270173   37008 notify.go:220] Checking for updates...
	I0513 17:33:32.274161   37008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:33:32.277012   37008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:33:32.280099   37008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:33:32.283097   37008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:33:32.286017   37008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:33:32.289334   37008 config.go:182] Loaded profile config "kubernetes-upgrade-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 17:33:32.289595   37008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:33:32.294117   37008 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:33:32.301088   37008 start.go:297] selected driver: qemu2
	I0513 17:33:32.301097   37008 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:33:32.301158   37008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:33:32.303326   37008 cni.go:84] Creating CNI manager for ""
	I0513 17:33:32.303345   37008 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:33:32.303369   37008 start.go:340] cluster config:
	{Name:kubernetes-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:33:32.307305   37008 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:33:32.314065   37008 out.go:177] * Starting "kubernetes-upgrade-549000" primary control-plane node in "kubernetes-upgrade-549000" cluster
	I0513 17:33:32.318087   37008 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:33:32.318104   37008 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:33:32.318113   37008 cache.go:56] Caching tarball of preloaded images
	I0513 17:33:32.318169   37008 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:33:32.318174   37008 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:33:32.318242   37008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kubernetes-upgrade-549000/config.json ...
	I0513 17:33:32.318619   37008 start.go:360] acquireMachinesLock for kubernetes-upgrade-549000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:33:32.318644   37008 start.go:364] duration metric: took 19.667µs to acquireMachinesLock for "kubernetes-upgrade-549000"
	I0513 17:33:32.318653   37008 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:33:32.318657   37008 fix.go:54] fixHost starting: 
	I0513 17:33:32.318761   37008 fix.go:112] recreateIfNeeded on kubernetes-upgrade-549000: state=Stopped err=<nil>
	W0513 17:33:32.318768   37008 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:33:32.327015   37008 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-549000" ...
	I0513 17:33:32.331084   37008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f6:7a:81:e5:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:32.332898   37008 main.go:141] libmachine: STDOUT: 
	I0513 17:33:32.332916   37008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:33:32.332942   37008 fix.go:56] duration metric: took 14.2845ms for fixHost
	I0513 17:33:32.332945   37008 start.go:83] releasing machines lock for "kubernetes-upgrade-549000", held for 14.297792ms
	W0513 17:33:32.332951   37008 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:33:32.332982   37008 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:33:32.332986   37008 start.go:728] Will try again in 5 seconds ...
	I0513 17:33:37.335122   37008 start.go:360] acquireMachinesLock for kubernetes-upgrade-549000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:33:37.335593   37008 start.go:364] duration metric: took 379.583µs to acquireMachinesLock for "kubernetes-upgrade-549000"
	I0513 17:33:37.335762   37008 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:33:37.335783   37008 fix.go:54] fixHost starting: 
	I0513 17:33:37.336527   37008 fix.go:112] recreateIfNeeded on kubernetes-upgrade-549000: state=Stopped err=<nil>
	W0513 17:33:37.336558   37008 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:33:37.344172   37008 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-549000" ...
	I0513 17:33:37.348305   37008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f6:7a:81:e5:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubernetes-upgrade-549000/disk.qcow2
	I0513 17:33:37.358168   37008 main.go:141] libmachine: STDOUT: 
	I0513 17:33:37.358281   37008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:33:37.358377   37008 fix.go:56] duration metric: took 22.596083ms for fixHost
	I0513 17:33:37.358395   37008 start.go:83] releasing machines lock for "kubernetes-upgrade-549000", held for 22.778458ms
	W0513 17:33:37.358583   37008 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-549000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-549000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:33:37.365180   37008 out.go:177] 
	W0513 17:33:37.369240   37008 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:33:37.369266   37008 out.go:239] * 
	* 
	W0513 17:33:37.371333   37008 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:33:37.380003   37008 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-549000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-549000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-549000 version --output=json: exit status 1 (62.775542ms)

** stderr ** 
	error: context "kubernetes-upgrade-549000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
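The kubectl failure is a consequence rather than a cause: both start attempts failed, so no "kubernetes-upgrade-549000" context was ever written to the kubeconfig. Two standard kubectl commands that confirm which contexts actually exist:

	kubectl config get-contexts      # lists every context in the active kubeconfig
	kubectl config current-context   # fails loudly when no context is selected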
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-13 17:33:37.457977 -0700 PDT m=+923.432253001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-549000 -n kubernetes-upgrade-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-549000 -n kubernetes-upgrade-549000: exit status 7 (31.403667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-549000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-549000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-549000
--- FAIL: TestKubernetesUpgrade (18.74s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18872
- KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2456233483/001
- Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
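DRV_UNSUPPORTED_OS is the expected outcome here: hyperkit is an Intel-only hypervisor, so on this Apple Silicon agent the test can only exercise the error path. A guard a harness might use to avoid it (choosing qemu2 as the arm64 fallback is an assumption consistent with the rest of this report, not part of the test itself):

	if [ "$(uname -m)" = "arm64" ]; then
	  driver=qemu2      # hyperkit is unsupported on darwin/arm64 (exit code 56 above)
	else
	  driver=hyperkit
	fi
	out/minikube-darwin-arm64 start --driver="$driver"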
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.38s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18872
- KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3318919415/001
- Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.96s)

TestStoppedBinaryUpgrade/Upgrade (564.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.749087778 start -p stopped-upgrade-201000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.749087778 start -p stopped-upgrade-201000 --memory=2200 --vm-driver=qemu2 : (39.353013625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.749087778 -p stopped-upgrade-201000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.749087778 -p stopped-upgrade-201000 stop: (3.102165084s)
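For reference, the sequence this test drives is: a released v1.26.0 binary creates the profile and stops it, then the binary under test restarts it in place. In outline, with OLD standing in for the downloaded v1.26.0 binary path shown above (abbreviated here):

	OLD=/var/folders/.../minikube-v1.26.0.749087778    # released binary (path abbreviated)
	$OLD start -p stopped-upgrade-201000 --memory=2200 --vm-driver=qemu2
	$OLD -p stopped-upgrade-201000 stop
	out/minikube-darwin-arm64 start -p stopped-upgrade-201000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2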
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-201000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-201000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.39530425s)

-- stdout --
	* [stopped-upgrade-201000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-201000" primary control-plane node in "stopped-upgrade-201000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-201000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0513 17:34:21.140929   37047 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:34:21.141091   37047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:34:21.141095   37047 out.go:304] Setting ErrFile to fd 2...
	I0513 17:34:21.141098   37047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:34:21.141239   37047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:34:21.142342   37047 out.go:298] Setting JSON to false
	I0513 17:34:21.160736   37047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27231,"bootTime":1715619630,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:34:21.160799   37047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:34:21.166019   37047 out.go:177] * [stopped-upgrade-201000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:34:21.173847   37047 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:34:21.178034   37047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:34:21.173880   37047 notify.go:220] Checking for updates...
	I0513 17:34:21.184012   37047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:34:21.186987   37047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:34:21.190014   37047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:34:21.192951   37047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:34:21.196281   37047 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:34:21.200042   37047 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0513 17:34:21.201335   37047 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:34:21.206030   37047 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:34:21.212855   37047 start.go:297] selected driver: qemu2
	I0513 17:34:21.212863   37047 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:34:21.212931   37047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:34:21.215571   37047 cni.go:84] Creating CNI manager for ""
	I0513 17:34:21.215597   37047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:34:21.215630   37047 start.go:340] cluster config:
	{Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:34:21.215703   37047 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:34:21.221949   37047 out.go:177] * Starting "stopped-upgrade-201000" primary control-plane node in "stopped-upgrade-201000" cluster
	I0513 17:34:21.226015   37047 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0513 17:34:21.226031   37047 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0513 17:34:21.226039   37047 cache.go:56] Caching tarball of preloaded images
	I0513 17:34:21.226092   37047 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:34:21.226097   37047 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0513 17:34:21.226141   37047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/config.json ...
	I0513 17:34:21.226578   37047 start.go:360] acquireMachinesLock for stopped-upgrade-201000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:34:21.226614   37047 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "stopped-upgrade-201000"
	I0513 17:34:21.226625   37047 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:34:21.226629   37047 fix.go:54] fixHost starting: 
	I0513 17:34:21.226738   37047 fix.go:112] recreateIfNeeded on stopped-upgrade-201000: state=Stopped err=<nil>
	W0513 17:34:21.226746   37047 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:34:21.231942   37047 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-201000" ...
	I0513 17:34:21.237684   37047 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/qemu.pid -nic user,model=virtio,hostfwd=tcp::56273-:22,hostfwd=tcp::56274-:2376,hostname=stopped-upgrade-201000 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/disk.qcow2
	I0513 17:34:21.281864   37047 main.go:141] libmachine: STDOUT: 
	I0513 17:34:21.281896   37047 main.go:141] libmachine: STDERR: 
	I0513 17:34:21.281901   37047 main.go:141] libmachine: Waiting for VM to start (ssh -p 56273 docker@127.0.0.1)...
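	(Note why this VM boots while the kubernetes-upgrade ones above did not: this profile predates socket_vmnet, as its config carries empty Network/SocketVMnetPath fields, so libmachine starts QEMU with self-contained user-mode networking and host port forwards instead of attaching to the vmnet socket. The two invocations reduced to their network-relevant parts, elisions marked with ...:
	# socket_vmnet path (fails while the daemon is down):
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3
	# user-mode fallback (no external daemon; SSH reaches the guest via hostfwd):
	qemu-system-aarch64 ... -nic user,model=virtio,hostfwd=tcp::56273-:22,hostfwd=tcp::56274-:2376
	)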
	I0513 17:34:41.922922   37047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/config.json ...
	I0513 17:34:41.923746   37047 machine.go:94] provisionDockerMachine start ...
	I0513 17:34:41.923949   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:41.924525   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:41.924542   37047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0513 17:34:42.011819   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0513 17:34:42.011853   37047 buildroot.go:166] provisioning hostname "stopped-upgrade-201000"
	I0513 17:34:42.011967   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.012215   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.012225   37047 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-201000 && echo "stopped-upgrade-201000" | sudo tee /etc/hostname
	I0513 17:34:42.088669   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-201000
	
	I0513 17:34:42.088734   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.088878   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.088889   37047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-201000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-201000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-201000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0513 17:34:42.156787   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0513 17:34:42.156800   37047 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18872-34554/.minikube CaCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18872-34554/.minikube}
	I0513 17:34:42.156808   37047 buildroot.go:174] setting up certificates
	I0513 17:34:42.156818   37047 provision.go:84] configureAuth start
	I0513 17:34:42.156822   37047 provision.go:143] copyHostCerts
	I0513 17:34:42.156906   37047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem, removing ...
	I0513 17:34:42.156912   37047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem
	I0513 17:34:42.157037   37047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.pem (1082 bytes)
	I0513 17:34:42.157222   37047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem, removing ...
	I0513 17:34:42.157227   37047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem
	I0513 17:34:42.157273   37047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/cert.pem (1123 bytes)
	I0513 17:34:42.157393   37047 exec_runner.go:144] found /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem, removing ...
	I0513 17:34:42.157396   37047 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem
	I0513 17:34:42.157439   37047 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18872-34554/.minikube/key.pem (1675 bytes)
	I0513 17:34:42.157533   37047 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-201000 san=[127.0.0.1 localhost minikube stopped-upgrade-201000]
	I0513 17:34:42.320293   37047 provision.go:177] copyRemoteCerts
	I0513 17:34:42.320338   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0513 17:34:42.320348   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:34:42.356770   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0513 17:34:42.363712   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0513 17:34:42.370399   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0513 17:34:42.377184   37047 provision.go:87] duration metric: took 220.365625ms to configureAuth
	I0513 17:34:42.377194   37047 buildroot.go:189] setting minikube options for container-runtime
	I0513 17:34:42.377314   37047 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:34:42.377346   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.377433   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.377439   37047 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0513 17:34:42.441704   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0513 17:34:42.441712   37047 buildroot.go:70] root file system type: tmpfs
	I0513 17:34:42.441767   37047 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0513 17:34:42.441815   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.441913   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.441946   37047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0513 17:34:42.509707   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0513 17:34:42.509752   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.509859   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.509869   37047 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0513 17:34:42.863161   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
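	The diff-or-replace command above is what keeps provisioning idempotent: diff -u exits non-zero when the rendered unit differs from the installed one (or, as here, when no unit is installed yet), and only then is the new file moved into place and the service reloaded, enabled, and restarted. The same pattern as a reusable helper, a sketch with hypothetical names:
	
	  install_if_changed() {
	    local new="$1" dst="$2" svc="$3"
	    # Identical file already installed: leave the running service alone.
	    sudo diff -u "$dst" "$new" >/dev/null 2>&1 && { sudo rm -f "$new"; return 0; }
	    sudo mv "$new" "$dst"
	    sudo systemctl daemon-reload && sudo systemctl enable "$svc" && sudo systemctl restart "$svc"
	  }
	  install_if_changed /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker
	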
	I0513 17:34:42.863175   37047 machine.go:97] duration metric: took 939.435917ms to provisionDockerMachine
	I0513 17:34:42.863182   37047 start.go:293] postStartSetup for "stopped-upgrade-201000" (driver="qemu2")
	I0513 17:34:42.863189   37047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0513 17:34:42.863249   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0513 17:34:42.863258   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:34:42.901682   37047 ssh_runner.go:195] Run: cat /etc/os-release
	I0513 17:34:42.903301   37047 info.go:137] Remote host: Buildroot 2021.02.12
	I0513 17:34:42.903317   37047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18872-34554/.minikube/addons for local assets ...
	I0513 17:34:42.903404   37047 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18872-34554/.minikube/files for local assets ...
	I0513 17:34:42.903522   37047 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem -> 350552.pem in /etc/ssl/certs
	I0513 17:34:42.903645   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0513 17:34:42.906197   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem --> /etc/ssl/certs/350552.pem (1708 bytes)
	I0513 17:34:42.913253   37047 start.go:296] duration metric: took 50.066667ms for postStartSetup
	I0513 17:34:42.913266   37047 fix.go:56] duration metric: took 21.687070542s for fixHost
	I0513 17:34:42.913297   37047 main.go:141] libmachine: Using SSH client type: native
	I0513 17:34:42.913398   37047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100919dd0] 0x10091c630 <nil>  [] 0s} localhost 56273 <nil> <nil>}
	I0513 17:34:42.913405   37047 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0513 17:34:42.978591   37047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715646882.637791838
	
	I0513 17:34:42.978600   37047 fix.go:216] guest clock: 1715646882.637791838
	I0513 17:34:42.978605   37047 fix.go:229] Guest: 2024-05-13 17:34:42.637791838 -0700 PDT Remote: 2024-05-13 17:34:42.913268 -0700 PDT m=+21.798858084 (delta=-275.476162ms)
	I0513 17:34:42.978616   37047 fix.go:200] guest clock delta is within tolerance: -275.476162ms
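	
	The skew check above compares the guest's date +%s.%N against host wall-clock time and resyncs only when the delta leaves the tolerance window; here the guest runs about 275 ms behind, so nothing is done. A rough stand-alone equivalent, assuming GNU date on both ends and "guest" as an SSH alias (both assumptions):
	
	  guest_ts=$(ssh guest 'date +%s.%N')
	  host_ts=$(date +%s.%N)
	  # Positive output means the guest clock is ahead of the host.
	  echo "skew: $(echo "$guest_ts - $host_ts" | bc)s"
	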
	I0513 17:34:42.978619   37047 start.go:83] releasing machines lock for "stopped-upgrade-201000", held for 21.752434666s
	I0513 17:34:42.978693   37047 ssh_runner.go:195] Run: cat /version.json
	I0513 17:34:42.978698   37047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0513 17:34:42.978702   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:34:42.978716   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	W0513 17:34:42.979335   37047 sshutil.go:64] dial failure (will retry): dial tcp [::1]:56273: connect: connection refused
	I0513 17:34:42.979357   37047 retry.go:31] will retry after 203.248018ms: dial tcp [::1]:56273: connect: connection refused
	W0513 17:34:43.225598   37047 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0513 17:34:43.225683   37047 ssh_runner.go:195] Run: systemctl --version
	I0513 17:34:43.228490   37047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0513 17:34:43.231012   37047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0513 17:34:43.231047   37047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0513 17:34:43.235091   37047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0513 17:34:43.241247   37047 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0513 17:34:43.241260   37047 start.go:494] detecting cgroup driver to use...
	I0513 17:34:43.241352   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 17:34:43.249803   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0513 17:34:43.253651   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0513 17:34:43.257179   37047 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0513 17:34:43.257207   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0513 17:34:43.260653   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 17:34:43.263780   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0513 17:34:43.266638   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0513 17:34:43.269702   37047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0513 17:34:43.272977   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0513 17:34:43.276296   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0513 17:34:43.279254   37047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0513 17:34:43.281988   37047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0513 17:34:43.285006   37047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0513 17:34:43.287790   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:43.357489   37047 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0513 17:34:43.367919   37047 start.go:494] detecting cgroup driver to use...
	I0513 17:34:43.367996   37047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0513 17:34:43.373953   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 17:34:43.379035   37047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0513 17:34:43.388316   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0513 17:34:43.392516   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 17:34:43.397048   37047 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0513 17:34:43.465167   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0513 17:34:43.470970   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0513 17:34:43.476988   37047 ssh_runner.go:195] Run: which cri-dockerd
	I0513 17:34:43.478522   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0513 17:34:43.481506   37047 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0513 17:34:43.486782   37047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0513 17:34:43.574564   37047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0513 17:34:43.648321   37047 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0513 17:34:43.648383   37047 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
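	
	The 130-byte daemon.json pushed here is not echoed in the log; given the "cgroupfs" message above, a representative file (an assumption about its contents, not the exact bytes written) would set the cgroup driver through exec-opts:
	
	  cat <<'EOF' | sudo tee /etc/docker/daemon.json
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }
	  EOF
	  sudo systemctl restart docker   # picked up on the restart that follows below
	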
	I0513 17:34:43.653419   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:43.736880   37047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 17:34:44.900359   37047 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163487916s)
	I0513 17:34:44.900415   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0513 17:34:44.905015   37047 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0513 17:34:44.910980   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 17:34:44.915748   37047 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0513 17:34:44.993564   37047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0513 17:34:45.065120   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:45.150063   37047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0513 17:34:45.156209   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0513 17:34:45.160863   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:45.231219   37047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0513 17:34:45.269819   37047 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0513 17:34:45.269889   37047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0513 17:34:45.272400   37047 start.go:562] Will wait 60s for crictl version
	I0513 17:34:45.272453   37047 ssh_runner.go:195] Run: which crictl
	I0513 17:34:45.273786   37047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0513 17:34:45.289293   37047 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0513 17:34:45.289357   37047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 17:34:45.306024   37047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0513 17:34:45.332581   37047 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0513 17:34:45.332713   37047 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0513 17:34:45.333916   37047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 17:34:45.337662   37047 kubeadm.go:877] updating cluster {Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0513 17:34:45.337705   37047 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0513 17:34:45.337745   37047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 17:34:45.348431   37047 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 17:34:45.348439   37047 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0513 17:34:45.348485   37047 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 17:34:45.352203   37047 ssh_runner.go:195] Run: which lz4
	I0513 17:34:45.353454   37047 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0513 17:34:45.354612   37047 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0513 17:34:45.354621   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0513 17:34:46.016575   37047 docker.go:649] duration metric: took 663.164292ms to copy over tarball
	I0513 17:34:46.016648   37047 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0513 17:34:47.174827   37047 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.158184667s)
	I0513 17:34:47.174840   37047 ssh_runner.go:146] rm: /preloaded.tar.lz4
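	
	The preload sequence above is a check-copy-extract-delete cycle: stat confirms the tarball is absent, scp pushes the ~360 MB archive, tar unpacks it into /var with lz4 decompression while keeping the security.capability xattrs some images need, and the archive is removed. Condensed into a sketch run from the host ("guest" is an illustrative SSH alias, as is the local tarball name):
	
	  ssh guest 'stat /preloaded.tar.lz4' 2>/dev/null || scp preloaded-images.tar.lz4 guest:/preloaded.tar.lz4
	  ssh guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
	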
	I0513 17:34:47.190205   37047 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0513 17:34:47.193640   37047 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0513 17:34:47.198812   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:47.259910   37047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0513 17:34:48.839936   37047 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.580039791s)
	I0513 17:34:48.840040   37047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0513 17:34:48.853176   37047 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0513 17:34:48.853187   37047 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0513 17:34:48.853193   37047 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0513 17:34:48.860016   37047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:48.860043   37047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:48.860020   37047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:48.860120   37047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:48.860128   37047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:48.860153   37047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:48.860204   37047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:48.860252   37047 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0513 17:34:48.868163   37047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:48.868270   37047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:48.868358   37047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:48.868610   37047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:48.869268   37047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:48.869324   37047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:48.869353   37047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:48.869322   37047 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0513 17:34:49.294912   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:49.306007   37047 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0513 17:34:49.306046   37047 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:49.306104   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0513 17:34:49.309996   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:49.316870   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0513 17:34:49.317695   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0513 17:34:49.328249   37047 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0513 17:34:49.328267   37047 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:49.328318   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0513 17:34:49.328332   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0513 17:34:49.328348   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0513 17:34:49.338589   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:49.364183   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:49.365113   37047 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0513 17:34:49.365134   37047 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:49.365098   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0513 17:34:49.365171   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0513 17:34:49.381353   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:49.393119   37047 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0513 17:34:49.393147   37047 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:49.393209   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0513 17:34:49.413487   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0513 17:34:49.431775   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0513 17:34:49.434840   37047 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0513 17:34:49.434898   37047 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:49.434949   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0513 17:34:49.440386   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0513 17:34:49.444082   37047 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0513 17:34:49.444186   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:49.479697   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0513 17:34:49.493243   37047 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0513 17:34:49.493264   37047 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:49.493264   37047 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0513 17:34:49.493325   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0513 17:34:49.493326   37047 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0513 17:34:49.493350   37047 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0513 17:34:49.523748   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0513 17:34:49.523868   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0513 17:34:49.548318   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0513 17:34:49.548350   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0513 17:34:49.559211   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0513 17:34:49.559333   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0513 17:34:49.581635   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0513 17:34:49.581666   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0513 17:34:49.621643   37047 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0513 17:34:49.621656   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0513 17:34:49.650469   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0513 17:34:49.650495   37047 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0513 17:34:49.650500   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0513 17:34:49.658418   37047 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0513 17:34:49.658518   37047 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:49.698681   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0513 17:34:49.698705   37047 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0513 17:34:49.698711   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0513 17:34:49.698703   37047 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0513 17:34:49.698744   37047 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:49.698800   37047 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:34:49.853184   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0513 17:34:49.853214   37047 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0513 17:34:49.853327   37047 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0513 17:34:49.854839   37047 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0513 17:34:49.854856   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0513 17:34:49.881623   37047 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0513 17:34:49.881636   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0513 17:34:50.124586   37047 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0513 17:34:50.124628   37047 cache_images.go:92] duration metric: took 1.271452958s to LoadCachedImages
	W0513 17:34:50.124674   37047 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
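	
	Each image above went through the same repair cycle: docker image inspect checks whether the expected hash is present, a stale tag is removed with docker rmi, the cached tarball is copied over, and docker load streams it into the daemon. The load step, generalized over the images that made it (names as in the log):
	
	  for img in pause_3.7 coredns_v1.8.6 etcd_3.5.3-0 storage-provisioner_v5; do
	    sudo cat "/var/lib/minikube/images/$img" | docker load
	  done
	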
	I0513 17:34:50.124680   37047 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0513 17:34:50.124731   37047 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-201000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0513 17:34:50.124802   37047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0513 17:34:50.138900   37047 cni.go:84] Creating CNI manager for ""
	I0513 17:34:50.138912   37047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:34:50.138917   37047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0513 17:34:50.138926   37047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-201000 NodeName:stopped-upgrade-201000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0513 17:34:50.138995   37047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-201000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
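	The rendered config is four YAML documents in one file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration that kubeadm passes through to those components. A file like this can be sanity-checked without touching the node, a sketch that assumes kubeadm is on the PATH:
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	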
	I0513 17:34:50.139047   37047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0513 17:34:50.142045   37047 binaries.go:44] Found k8s binaries, skipping transfer
	I0513 17:34:50.142074   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0513 17:34:50.145102   37047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0513 17:34:50.150154   37047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0513 17:34:50.155109   37047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0513 17:34:50.160236   37047 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0513 17:34:50.161358   37047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0513 17:34:50.165258   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:34:50.248584   37047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:34:50.255068   37047 certs.go:68] Setting up /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000 for IP: 10.0.2.15
	I0513 17:34:50.255079   37047 certs.go:194] generating shared ca certs ...
	I0513 17:34:50.255088   37047 certs.go:226] acquiring lock for ca certs: {Name:mk4bcf4fefcc4c80b8079c869e5ba8b057091109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.255244   37047 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.key
	I0513 17:34:50.255297   37047 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.key
	I0513 17:34:50.255302   37047 certs.go:256] generating profile certs ...
	I0513 17:34:50.255384   37047 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key
	I0513 17:34:50.255404   37047 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6
	I0513 17:34:50.255415   37047 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0513 17:34:50.371358   37047 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6 ...
	I0513 17:34:50.371370   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6: {Name:mk9cf29c2ea8736ae5d3a43c029c95bade14f03e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.371666   37047 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6 ...
	I0513 17:34:50.371672   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6: {Name:mkc10f4b7a2f9c8ff2776d724bc4cc0eb180933d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.371795   37047 certs.go:381] copying /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt.968a75f6 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt
	I0513 17:34:50.371938   37047 certs.go:385] copying /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key.968a75f6 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key
	I0513 17:34:50.372082   37047 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/proxy-client.key
	I0513 17:34:50.372215   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055.pem (1338 bytes)
	W0513 17:34:50.372242   37047 certs.go:480] ignoring /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055_empty.pem, impossibly tiny 0 bytes
	I0513 17:34:50.372247   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca-key.pem (1675 bytes)
	I0513 17:34:50.372266   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem (1082 bytes)
	I0513 17:34:50.372289   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem (1123 bytes)
	I0513 17:34:50.372306   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/key.pem (1675 bytes)
	I0513 17:34:50.372345   37047 certs.go:484] found cert: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem (1708 bytes)
	I0513 17:34:50.372657   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0513 17:34:50.379734   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0513 17:34:50.387051   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0513 17:34:50.393553   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0513 17:34:50.401293   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0513 17:34:50.408741   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0513 17:34:50.416135   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0513 17:34:50.423981   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0513 17:34:50.431590   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/ssl/certs/350552.pem --> /usr/share/ca-certificates/350552.pem (1708 bytes)
	I0513 17:34:50.438289   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0513 17:34:50.445294   37047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/35055.pem --> /usr/share/ca-certificates/35055.pem (1338 bytes)
	I0513 17:34:50.452419   37047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0513 17:34:50.457564   37047 ssh_runner.go:195] Run: openssl version
	I0513 17:34:50.459562   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0513 17:34:50.462414   37047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:34:50.463843   37047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 14 00:31 /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:34:50.463861   37047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0513 17:34:50.465497   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0513 17:34:50.468865   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35055.pem && ln -fs /usr/share/ca-certificates/35055.pem /etc/ssl/certs/35055.pem"
	I0513 17:34:50.472202   37047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35055.pem
	I0513 17:34:50.473500   37047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 14 00:19 /usr/share/ca-certificates/35055.pem
	I0513 17:34:50.473517   37047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35055.pem
	I0513 17:34:50.475324   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/35055.pem /etc/ssl/certs/51391683.0"
	I0513 17:34:50.478074   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/350552.pem && ln -fs /usr/share/ca-certificates/350552.pem /etc/ssl/certs/350552.pem"
	I0513 17:34:50.481471   37047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/350552.pem
	I0513 17:34:50.482924   37047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 14 00:19 /usr/share/ca-certificates/350552.pem
	I0513 17:34:50.482956   37047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/350552.pem
	I0513 17:34:50.484665   37047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/350552.pem /etc/ssl/certs/3ec20f2e.0"
	I0513 17:34:50.487718   37047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0513 17:34:50.489149   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0513 17:34:50.491428   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0513 17:34:50.493346   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0513 17:34:50.495678   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0513 17:34:50.497492   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0513 17:34:50.499427   37047 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
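	
	Each check above relies on openssl's -checkend flag: with 86400 seconds it exits 0 if the certificate is still valid 24 hours from now and 1 if it will have expired by then, which is what decides whether a cert gets regenerated. Stand-alone form:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least another day" || echo "expiring soon; regenerate"
	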
	I0513 17:34:50.501358   37047 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:56308 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0513 17:34:50.501433   37047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 17:34:50.511790   37047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0513 17:34:50.514612   37047 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0513 17:34:50.514621   37047 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0513 17:34:50.514624   37047 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0513 17:34:50.514647   37047 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0513 17:34:50.517586   37047 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0513 17:34:50.517891   37047 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-201000" does not appear in /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:34:50.517993   37047 kubeconfig.go:62] /Users/jenkins/minikube-integration/18872-34554/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-201000" cluster setting kubeconfig missing "stopped-upgrade-201000" context setting]
	I0513 17:34:50.518199   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:34:50.518640   37047 kapi.go:59] client config for stopped-upgrade-201000: &{{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ca1e10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 17:34:50.518968   37047 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0513 17:34:50.521636   37047 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-201000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0513 17:34:50.521640   37047 kubeadm.go:1154] stopping kube-system containers ...
	I0513 17:34:50.521680   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0513 17:34:50.532361   37047 docker.go:483] Stopping containers: [47dfe97c593d 2f96dad126c2 c06366361f20 b3d353a21008 efba4f55cfe3 ae8a30a7a109 95d64d777ab1 addde02f95eb]
	I0513 17:34:50.532428   37047 ssh_runner.go:195] Run: docker stop 47dfe97c593d 2f96dad126c2 c06366361f20 b3d353a21008 efba4f55cfe3 ae8a30a7a109 95d64d777ab1 addde02f95eb
	I0513 17:34:50.543011   37047 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0513 17:34:50.548472   37047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:34:50.551409   37047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 17:34:50.551424   37047 kubeadm.go:156] found existing configuration files:
	
	I0513 17:34:50.551445   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf
	I0513 17:34:50.553954   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 17:34:50.553981   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:34:50.556997   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf
	I0513 17:34:50.559975   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 17:34:50.560015   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:34:50.562563   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf
	I0513 17:34:50.565300   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 17:34:50.565324   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:34:50.568298   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf
	I0513 17:34:50.570901   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 17:34:50.570928   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0513 17:34:50.573549   37047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:34:50.576664   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:50.599223   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:51.081674   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:51.219257   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0513 17:34:51.244307   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
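	[editor's note] Rather than a full `kubeadm init`, the restart path above re-runs the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the freshly copied /var/tmp/minikube/kubeadm.yaml and with the versioned binaries directory prefixed to PATH. A sketch under those assumptions (illustrative only; the real runner executes each command over SSH):

```go
// Sketch of the phased kubeadm bring-up logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		// Locally this requires kubeadm at that path; minikube runs it
		// inside the guest via ssh_runner.
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
		}
	}
}
```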
	I0513 17:34:51.271313   37047 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:34:51.271390   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:34:51.772008   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:34:52.273427   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:34:52.277960   37047 api_server.go:72] duration metric: took 1.006669333s to wait for apiserver process to appear ...
	I0513 17:34:52.277969   37047 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:34:52.277983   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:34:57.280007   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:34:57.280045   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:02.280189   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:02.280229   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:07.280535   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:07.280584   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:12.281055   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:12.281089   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:17.281673   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:17.281733   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:22.282771   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:22.282836   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:27.284074   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:27.284099   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:32.285436   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:32.285487   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:37.287397   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:37.287416   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:42.289546   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:42.289595   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:47.291865   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:35:47.291904   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:35:52.294048   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
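	[editor's note] The polling pattern above probes https://10.0.2.15:8443/healthz with a ~5s client timeout per attempt, which is why "Client.Timeout exceeded while awaiting headers" recurs every five seconds until the wait budget runs out and diagnostics begin. A minimal sketch of such a loop; the overall deadline, retry pacing, and InsecureSkipVerify are assumptions, not taken from minikube's source:

```go
// Sketch of an apiserver healthz poll with a per-probe client timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe budget, as seen in the log
		// Assumption: skip cert verification, since the apiserver presents
		// a self-signed certificate during bring-up.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // cf. api_server.go:269 above
			time.Sleep(500 * time.Millisecond) // pacing for fast failures
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```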
	I0513 17:35:52.294172   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:35:52.305488   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:35:52.305560   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:35:52.318463   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:35:52.318549   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:35:52.329652   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:35:52.329726   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:35:52.341373   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:35:52.341456   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:35:52.352646   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:35:52.352729   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:35:52.364215   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:35:52.364317   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:35:52.375532   37047 logs.go:276] 0 containers: []
	W0513 17:35:52.375543   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:35:52.375606   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:35:52.387322   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:35:52.387342   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:35:52.387347   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:35:52.415155   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:35:52.415166   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:35:52.432935   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:35:52.432947   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:35:52.446185   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:35:52.446199   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:35:52.576742   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:35:52.576757   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:35:52.592737   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:35:52.592755   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:35:52.605088   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:35:52.605102   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:35:52.631847   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:35:52.631869   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:35:52.670713   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:35:52.670738   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:35:52.675281   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:35:52.675293   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:35:52.688441   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:35:52.688455   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:35:52.701758   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:35:52.701771   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:35:52.722518   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:35:52.722533   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:35:52.736967   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:35:52.736982   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:35:52.748568   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:35:52.748579   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:35:52.763899   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:35:52.763919   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:35:52.785644   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:35:52.785665   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
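	[editor's note] Each time the healthz probe gives up, the run above gathers diagnostics: it enumerates containers per control-plane component with a `docker ps` name filter (`k8s_<component>`), tails the last 400 lines of each, and also collects the kubelet and docker journals, dmesg, and `kubectl describe nodes`. This cycle repeats for every failed wait below. A sketch of the container-side gathering; the helper name is hypothetical, and minikube issues the same commands through its ssh_runner:

```go
// Sketch of the per-component container log gathering logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the k8s_<component> prefix, mirroring the docker ps filter in the log.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids := containerIDs(c)
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Tail the last 400 lines, as in `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("  %s: %d bytes of logs\n", id, len(logs))
		}
	}
}
```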
	I0513 17:35:55.300356   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:00.302467   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:00.302567   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:00.314397   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:00.314472   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:00.332047   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:00.332118   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:00.348528   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:00.348620   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:00.359682   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:00.359749   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:00.370175   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:00.370246   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:00.381076   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:00.381145   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:00.392397   37047 logs.go:276] 0 containers: []
	W0513 17:36:00.392409   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:00.392465   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:00.405672   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:00.405692   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:00.405698   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:00.421945   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:00.421960   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:00.435022   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:00.435036   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:00.461177   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:00.461192   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:00.475795   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:00.475812   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:00.490568   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:00.490581   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:00.510508   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:00.510526   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:00.525512   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:00.525525   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:00.530449   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:00.530462   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:00.545574   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:00.545588   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:00.557111   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:00.557123   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:00.577302   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:00.577312   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:00.597257   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:00.597271   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:00.638041   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:00.638051   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:00.652532   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:00.652546   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:00.663874   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:00.663888   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:00.701760   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:00.701768   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:03.229584   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:08.231732   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:08.231868   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:08.243318   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:08.243397   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:08.254636   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:08.254719   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:08.265916   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:08.265988   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:08.276158   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:08.276231   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:08.286808   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:08.286881   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:08.300899   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:08.300966   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:08.311002   37047 logs.go:276] 0 containers: []
	W0513 17:36:08.311014   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:08.311074   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:08.321209   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:08.321230   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:08.321236   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:08.357749   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:08.357762   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:08.372086   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:08.372098   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:08.383559   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:08.383571   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:08.396230   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:08.396244   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:08.400192   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:08.400200   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:08.411754   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:08.411766   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:08.429087   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:08.429098   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:08.454414   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:08.454423   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:08.492458   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:08.492465   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:08.506479   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:08.506494   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:08.520481   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:08.520491   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:08.536231   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:08.536242   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:08.553132   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:08.553143   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:08.572825   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:08.572835   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:08.584425   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:08.584437   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:08.595876   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:08.595888   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:11.121969   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:16.124146   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:16.124257   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:16.135141   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:16.135224   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:16.146495   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:16.146574   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:16.156809   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:16.156875   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:16.168723   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:16.168799   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:16.179311   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:16.179382   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:16.190831   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:16.190898   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:16.201242   37047 logs.go:276] 0 containers: []
	W0513 17:36:16.201257   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:16.201314   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:16.211777   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:16.211794   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:16.211800   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:16.225425   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:16.225436   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:16.241507   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:16.241523   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:16.246075   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:16.246086   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:16.282681   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:16.282692   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:16.299775   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:16.299789   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:16.311938   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:16.311951   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:16.323408   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:16.323419   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:16.347524   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:16.347533   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:16.362059   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:16.362069   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:16.381355   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:16.381364   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:16.393472   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:16.393482   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:16.431599   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:16.431605   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:16.449162   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:16.449173   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:16.460512   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:16.460521   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:16.485638   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:16.485646   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:16.496941   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:16.496957   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:19.013756   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:24.016217   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:24.016383   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:24.026919   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:24.026990   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:24.037487   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:24.037577   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:24.054101   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:24.054171   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:24.064295   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:24.064372   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:24.078148   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:24.078213   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:24.089428   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:24.089495   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:24.099414   37047 logs.go:276] 0 containers: []
	W0513 17:36:24.099428   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:24.099483   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:24.110121   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:24.110140   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:24.110145   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:24.124598   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:24.124610   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:24.142509   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:24.142523   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:24.156545   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:24.156554   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:24.167989   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:24.168000   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:24.186125   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:24.186137   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:24.223024   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:24.223034   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:24.236668   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:24.236678   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:24.248513   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:24.248524   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:24.260478   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:24.260488   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:24.273814   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:24.273824   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:24.297691   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:24.297710   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:24.333274   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:24.333284   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:24.358241   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:24.358254   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:24.369761   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:24.369773   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:24.384064   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:24.384073   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:24.403977   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:24.403986   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:26.910341   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:31.912481   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:31.912598   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:31.924604   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:31.924677   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:31.935320   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:31.935410   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:31.945907   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:31.945974   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:31.956827   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:31.956900   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:31.966959   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:31.967033   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:31.977581   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:31.977645   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:31.988328   37047 logs.go:276] 0 containers: []
	W0513 17:36:31.988339   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:31.988396   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:31.998803   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:31.998822   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:31.998828   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:32.036978   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:32.036987   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:32.062654   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:32.062664   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:32.076942   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:32.076953   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:32.089008   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:32.089019   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:32.103556   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:32.103567   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:32.124904   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:32.124913   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:32.162193   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:32.162202   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:32.166806   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:32.166813   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:32.190078   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:32.190083   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:32.202335   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:32.202345   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:32.214718   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:32.214729   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:32.226306   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:32.226315   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:32.238427   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:32.238437   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:32.255466   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:32.255480   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:32.267253   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:32.267265   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:32.281110   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:32.281118   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:34.797701   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:39.798643   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:39.798755   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:39.810268   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:39.810345   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:39.821298   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:39.821367   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:39.832041   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:39.832106   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:39.842469   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:39.842547   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:39.853339   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:39.853409   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:39.864356   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:39.864425   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:39.874834   37047 logs.go:276] 0 containers: []
	W0513 17:36:39.874845   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:39.874903   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:39.885986   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:39.886004   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:39.886009   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:39.921825   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:39.921835   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:39.936418   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:39.936429   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:39.960522   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:39.960538   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:39.978186   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:39.978196   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:39.997808   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:39.997818   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:40.009201   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:40.009211   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:40.020601   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:40.020614   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:40.039097   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:40.039108   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:40.051383   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:40.051394   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:40.063373   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:40.063383   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:40.074371   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:40.074381   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:40.097648   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:40.097656   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:40.134661   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:40.134671   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:40.138726   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:40.138731   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:40.152795   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:40.152810   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:40.168152   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:40.168165   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:42.682111   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:47.684360   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:47.684549   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:47.710043   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:47.710140   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:47.724261   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:47.724344   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:47.736011   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:47.736081   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:47.747034   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:47.747116   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:47.757760   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:47.757829   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:47.768650   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:47.768719   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:47.779101   37047 logs.go:276] 0 containers: []
	W0513 17:36:47.779111   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:47.779170   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:47.789508   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:47.789525   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:47.789533   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:47.794730   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:47.794740   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:47.806754   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:47.806765   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:47.818704   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:47.818713   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:47.845806   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:47.845816   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:47.856539   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:47.856548   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:47.867697   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:47.867707   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:47.892651   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:47.892658   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:47.904357   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:47.904370   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:47.929257   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:47.929267   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:47.944115   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:47.944127   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:47.959416   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:47.959430   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:47.977001   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:47.977010   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:48.015704   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:48.015716   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:48.055476   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:48.055488   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:48.072302   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:48.072348   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:48.086457   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:48.086471   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:50.602856   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:36:55.605109   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:36:55.605302   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:36:55.628459   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:36:55.628578   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:36:55.644088   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:36:55.644161   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:36:55.657032   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:36:55.657090   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:36:55.667526   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:36:55.667597   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:36:55.677856   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:36:55.677935   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:36:55.693974   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:36:55.694040   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:36:55.704291   37047 logs.go:276] 0 containers: []
	W0513 17:36:55.704302   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:36:55.704357   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:36:55.714590   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:36:55.714607   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:36:55.714622   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:36:55.752495   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:36:55.752506   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:36:55.796371   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:36:55.796388   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:36:55.829672   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:36:55.829685   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:36:55.846032   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:36:55.846044   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:36:55.859292   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:36:55.859305   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:36:55.872834   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:36:55.872846   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:36:55.898626   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:36:55.898637   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:36:55.910388   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:36:55.910399   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:36:55.924729   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:36:55.924739   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:36:55.939186   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:36:55.939196   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:36:55.950738   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:36:55.950749   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:36:55.965387   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:36:55.965397   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:36:55.982683   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:36:55.982695   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:36:56.002721   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:36:56.002731   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:36:56.006998   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:36:56.007005   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:36:56.026090   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:36:56.026100   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:36:58.539733   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:03.542047   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:03.542279   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:03.569138   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:03.569244   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:03.585032   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:03.585115   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:03.597107   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:03.597177   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:03.607934   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:03.607999   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:03.618559   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:03.618629   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:03.629615   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:03.629677   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:03.639832   37047 logs.go:276] 0 containers: []
	W0513 17:37:03.639845   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:03.639895   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:03.650339   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:03.650355   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:03.650361   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:03.668513   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:03.668524   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:03.705040   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:03.705048   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:03.709001   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:03.709009   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:03.729390   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:03.729401   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:03.741016   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:03.741027   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:03.755214   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:03.755223   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:03.783310   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:03.783323   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:03.795081   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:03.795091   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:03.811776   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:03.811791   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:03.837438   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:03.837447   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:03.849019   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:03.849030   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:03.860800   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:03.860815   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:03.885193   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:03.885200   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:03.922388   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:03.922399   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:03.937802   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:03.937812   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:03.957293   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:03.957304   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
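
Each cycle in this trace is the same two-phase pattern: a single healthz probe (the api_server.go:253/269 pair) followed by the diagnostic gather above. The probe is an HTTPS GET against https://10.0.2.15:8443/healthz that the client abandons when its timeout elapses, which is where the repeated "Client.Timeout exceeded while awaiting headers" comes from. A minimal Go sketch of such a probe — the `apiServerHealthz` helper is hypothetical, and this illustrates the pattern rather than minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiServerHealthz is a hypothetical helper performing one probe of the kind
// logged at api_server.go:253: GET https://<ip>:8443/healthz with a hard
// client-side timeout. It returns nil only on HTTP 200.
func apiServerHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout, // source of "Client.Timeout exceeded" in the log
		Transport: &http.Transport{
			// The apiserver's serving cert is self-signed at this stage, so a
			// bootstrap probe typically has to skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. the timeout error reported at api_server.go:269
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := apiServerHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
		fmt.Println("stopped:", err)
	}
}
```

The five-second gap between each `Checking apiserver healthz` line and its `stopped:` line matches a client timeout of that length.
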
	I0513 17:37:06.470498   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:11.472654   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:11.472788   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:11.485829   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:11.485903   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:11.496992   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:11.497060   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:11.508943   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:11.509010   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:11.519517   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:11.519583   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:11.529841   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:11.529908   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:11.541127   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:11.541199   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:11.551632   37047 logs.go:276] 0 containers: []
	W0513 17:37:11.551643   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:11.551695   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:11.562246   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:11.562265   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:11.562271   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:11.600381   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:11.600392   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:11.604985   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:11.604990   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:11.615894   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:11.615905   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:11.627659   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:11.627671   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:11.643032   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:11.643046   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:11.654032   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:11.654043   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:11.666177   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:11.666187   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:11.701446   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:11.701460   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:11.726283   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:11.726294   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:11.740603   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:11.740613   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:11.752398   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:11.752407   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:11.771469   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:11.771479   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:11.783056   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:11.783069   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:11.797114   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:11.797125   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:11.811876   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:11.811885   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:11.829557   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:11.829569   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:14.354400   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:19.356560   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:19.356771   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:19.374848   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:19.374932   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:19.388019   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:19.388089   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:19.399922   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:19.399988   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:19.410323   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:19.410390   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:19.420756   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:19.420815   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:19.431298   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:19.431372   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:19.441079   37047 logs.go:276] 0 containers: []
	W0513 17:37:19.441091   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:19.441150   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:19.451489   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:19.451509   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:19.451514   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:19.465197   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:19.465207   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:19.476741   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:19.476752   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:19.501125   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:19.501134   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:19.512712   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:19.512723   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:19.526947   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:19.526956   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:19.541973   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:19.541984   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:19.560232   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:19.560244   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:19.572024   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:19.572036   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:19.583471   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:19.583482   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:19.594732   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:19.594742   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:19.598651   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:19.598659   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:19.622632   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:19.622643   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:19.637471   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:19.637482   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:19.655041   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:19.655054   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:19.692845   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:19.692852   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:19.728215   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:19.728225   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:22.248727   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:27.251019   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:27.251172   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:27.263735   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:27.263812   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:27.274700   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:27.274769   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:27.285029   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:27.285097   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:27.295848   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:27.295916   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:27.306120   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:27.306197   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:27.316479   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:27.316543   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:27.326488   37047 logs.go:276] 0 containers: []
	W0513 17:37:27.326497   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:27.326552   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:27.337067   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:27.337085   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:27.337091   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:27.360310   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:27.360319   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:27.397906   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:27.397920   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:27.412843   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:27.412857   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:27.438676   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:27.438685   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:27.458009   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:27.458018   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:27.468966   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:27.468978   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:27.482988   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:27.482997   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:27.494539   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:27.494551   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:27.505862   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:27.505873   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:27.540852   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:27.540862   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:27.555341   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:27.555351   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:27.566549   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:27.566574   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:27.583461   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:27.583471   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:27.587735   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:27.587740   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:27.599334   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:27.599344   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:27.614969   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:27.614979   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
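
Every diagnostic pass opens by enumerating the control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, and logs.go:276 then reports how many IDs came back (e.g. "2 containers: [3fb8f18ccf20 efba4f55cfe3]"). A small sketch of that enumeration step, assuming a hypothetical `listContainers` helper that shells out to docker locally rather than over SSH as ssh_runner.go does:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers is a hypothetical stand-in for the enumeration step in the
// log: list all containers (running or exited) whose name carries the
// kubelet's k8s_<component> prefix and return their IDs.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Mirrors logs.go:276: "N containers: [...]".
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```

Because `-a` includes exited containers, two IDs for a component (as with kube-apiserver here) indicates at least one restart; the zero result for "kindnet" and the warning at logs.go:278 are expected when that CNI isn't deployed in the cluster.
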
	I0513 17:37:30.128460   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:35.128863   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:35.129008   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:35.149967   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:35.150057   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:35.162291   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:35.162363   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:35.173352   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:35.173418   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:35.183742   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:35.183814   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:35.194725   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:35.194792   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:35.205546   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:35.205606   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:35.215601   37047 logs.go:276] 0 containers: []
	W0513 17:37:35.215611   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:35.215669   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:35.226174   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:35.226191   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:35.226196   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:35.240146   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:35.240156   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:35.251486   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:35.251498   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:35.290011   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:35.290021   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:35.304975   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:35.304986   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:35.316808   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:35.316819   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:35.339523   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:35.339529   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:35.350758   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:35.350768   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:35.362283   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:35.362294   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:35.366438   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:35.366444   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:35.400131   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:35.400141   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:35.415298   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:35.415308   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:35.433191   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:35.433203   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:35.448531   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:35.448543   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:35.463222   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:35.463233   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:35.487952   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:35.487962   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:35.510612   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:35.510621   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:38.032924   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:43.035059   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:43.035288   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:43.059669   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:43.059773   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:43.074994   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:43.075074   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:43.088284   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:43.088355   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:43.099127   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:43.099202   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:43.108999   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:43.109068   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:43.120416   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:43.120486   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:43.130817   37047 logs.go:276] 0 containers: []
	W0513 17:37:43.130827   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:43.130880   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:43.141386   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:43.141403   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:43.141408   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:43.179315   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:43.179331   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:43.194068   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:43.194078   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:43.214092   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:43.214103   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:43.228159   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:43.228168   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:37:43.239807   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:43.239818   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:43.254678   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:43.254689   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:43.265540   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:43.265551   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:43.277454   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:43.277464   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:43.299179   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:43.299191   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:43.317606   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:43.317617   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:43.329386   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:43.329396   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:43.367127   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:43.367145   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:43.371624   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:43.371631   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:43.385887   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:43.385898   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:43.411667   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:43.411677   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:43.423294   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:43.423306   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:45.949231   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:50.951521   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:50.951638   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:50.965286   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:50.965359   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:50.977145   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:50.977205   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:50.988487   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:50.988549   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:50.999019   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:50.999089   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:51.009268   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:51.009338   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:51.019625   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:51.019690   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:51.029684   37047 logs.go:276] 0 containers: []
	W0513 17:37:51.029696   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:51.029749   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:51.040220   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:51.040237   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:51.040242   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:51.076684   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:51.076692   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:51.090766   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:51.090780   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:51.105481   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:51.105494   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:51.122819   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:51.122830   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:51.140597   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:51.141169   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:51.180547   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:51.180566   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:51.194906   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:51.194920   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:51.212334   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:51.212348   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:51.225735   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:51.225747   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:51.251532   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:51.251543   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:51.263070   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:51.263080   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:51.281684   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:51.281694   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:51.293732   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:51.293746   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:51.316833   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:51.316843   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:51.321233   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:51.321239   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:51.335562   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:51.335571   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
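
One gather step worth unpacking is the "container status" command, which prefers crictl but degrades gracefully to docker: if `which crictl` prints nothing, the backtick command substitution leaves the bare word `crictl`, that invocation fails, and the `|| sudo docker ps -a` branch runs instead. A brief Go sketch executing the same bash one-liner locally (in the log it runs over SSH via ssh_runner.go):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim fallback chain from the log: try crictl, fall back to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
```
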
	I0513 17:37:53.851821   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:37:58.854073   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:37:58.854274   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:37:58.877021   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:37:58.877106   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:37:58.890226   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:37:58.890295   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:37:58.902106   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:37:58.902175   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:37:58.915698   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:37:58.915767   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:37:58.926085   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:37:58.926148   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:37:58.936529   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:37:58.936595   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:37:58.949102   37047 logs.go:276] 0 containers: []
	W0513 17:37:58.949118   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:37:58.949180   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:37:58.960058   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:37:58.960075   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:37:58.960082   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:37:58.997842   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:37:58.997849   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:37:59.001901   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:37:59.001910   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:37:59.019159   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:37:59.019169   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:37:59.042621   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:37:59.042629   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:37:59.057155   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:37:59.057165   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:37:59.073237   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:37:59.073250   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:37:59.084723   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:37:59.084735   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:37:59.097508   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:37:59.097520   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:37:59.143060   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:37:59.143077   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:37:59.158222   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:37:59.158235   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:37:59.185352   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:37:59.185366   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:37:59.200739   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:37:59.200750   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:37:59.212048   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:37:59.212058   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:37:59.226739   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:37:59.226750   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:37:59.253593   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:37:59.253603   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:37:59.264745   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:37:59.264758   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:01.776590   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:06.776975   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:06.777180   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:06.799416   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:06.799517   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:06.814696   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:06.814765   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:06.826971   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:06.827045   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:06.837945   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:06.838008   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:06.855315   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:06.855382   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:06.866130   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:06.866189   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:06.875662   37047 logs.go:276] 0 containers: []
	W0513 17:38:06.875672   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:06.875723   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:06.886343   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:06.886361   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:06.886367   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:06.890791   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:06.890798   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:06.915534   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:06.915544   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:06.930231   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:06.930241   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:06.941751   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:06.941760   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:06.963177   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:06.963187   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:07.001065   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:07.001073   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:07.019091   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:07.019102   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:07.035718   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:07.035729   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:07.047493   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:07.047504   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:07.058852   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:07.058862   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:07.070726   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:07.070737   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:07.106848   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:07.106860   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:07.121529   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:07.121538   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:07.135615   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:07.135625   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:07.147223   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:07.147234   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:07.159353   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:07.159363   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:09.686071   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:14.688363   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:14.688497   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:14.701125   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:14.701193   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:14.711984   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:14.712058   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:14.722189   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:14.722255   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:14.734070   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:14.734140   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:14.744599   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:14.744662   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:14.758748   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:14.758813   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:14.768640   37047 logs.go:276] 0 containers: []
	W0513 17:38:14.768658   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:14.768710   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:14.779264   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:14.779283   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:14.779288   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:14.802159   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:14.802166   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:14.826698   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:14.826708   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:14.845070   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:14.845082   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:14.869908   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:14.869920   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:14.884219   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:14.884230   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:14.895983   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:14.895993   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:14.912749   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:14.912761   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:14.928153   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:14.928164   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:14.939996   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:14.940007   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:14.953059   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:14.953069   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:14.991128   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:14.991143   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:14.995981   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:14.995986   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:15.010587   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:15.010597   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:15.021572   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:15.021583   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:15.057043   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:15.057057   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:15.068660   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:15.068671   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:17.587940   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:22.590436   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:22.590692   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:22.617662   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:22.617783   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:22.635935   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:22.636011   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:22.649800   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:22.649868   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:22.661279   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:22.661349   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:22.671877   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:22.671937   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:22.682974   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:22.683033   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:22.693124   37047 logs.go:276] 0 containers: []
	W0513 17:38:22.693135   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:22.693193   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:22.703710   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:22.703730   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:22.703736   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:22.708320   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:22.708330   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:22.722244   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:22.722258   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:22.759188   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:22.759195   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:22.773009   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:22.773020   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:22.784512   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:22.784523   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:22.805237   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:22.805247   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:22.818925   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:22.818934   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:22.833966   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:22.833977   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:22.848826   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:22.848839   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:22.870828   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:22.870840   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:22.882528   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:22.882537   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:22.918182   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:22.918191   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:22.943252   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:22.943263   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:22.961924   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:22.961936   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:22.974134   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:22.974145   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:22.992584   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:22.992594   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
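
	Each gathering pass above opens with eight docker ps -a --filter=name=k8s_<component> --format={{.ID}} queries (logs.go:276), one per control-plane component, whose bare container IDs feed the subsequent docker logs --tail 400 calls; the "container status" step additionally falls back from crictl to docker. A minimal Go sketch of that discovery step, run against a local Docker daemon rather than through minikube's ssh_runner (an assumption for illustration); the component list is copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the discovery step above: it asks Docker for the
    // IDs of all containers (running or exited) whose name matches the
    // k8s_<component> prefix. The flags are exactly those shown in the log.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }

	An empty result for a component produces exactly the "0 containers: []" / "No container was found matching" pair that recurs for "kindnet" throughout this log.
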
	I0513 17:38:25.505988   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:30.508186   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
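
	The api_server.go:253/269 pairs above are single probes of the apiserver's /healthz endpoint that give up after five seconds (the "Client.Timeout exceeded while awaiting headers" failures). A sketch of one such probe under those assumptions; skipping TLS verification here stands in for minikube's real client-certificate and CA handling and is for illustration only:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against /healthz with a hard 5s budget,
    // matching the cadence of the timeouts in the log above. A healthy
    // apiserver answers 200 with the body "ok".
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded while awaiting headers
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(probeHealthz("https://10.0.2.15:8443/healthz"))
    }
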
	I0513 17:38:30.508357   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:30.526829   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:30.526927   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:30.541663   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:30.541739   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:30.554275   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:30.554344   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:30.564979   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:30.565052   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:30.576901   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:30.576966   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:30.587220   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:30.587284   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:30.597762   37047 logs.go:276] 0 containers: []
	W0513 17:38:30.597773   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:30.597827   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:30.608365   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:30.608382   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:30.608389   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:30.646449   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:30.646459   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:30.660878   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:30.660891   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:30.673218   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:30.673228   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:30.688189   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:30.688205   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:30.707206   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:30.707221   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:30.721364   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:30.721379   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:30.755911   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:30.755925   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:30.769759   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:30.769770   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:30.780792   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:30.780803   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:30.792283   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:30.792294   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:30.804273   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:30.804283   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:30.808213   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:30.808222   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:30.832924   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:30.832933   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:30.845308   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:30.845322   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:30.866407   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:30.866418   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:30.877656   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:30.877666   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:33.403212   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:38.404678   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:38.404794   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:38.432716   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:38.432790   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:38.448942   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:38.449011   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:38.459694   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:38.459775   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:38.470670   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:38.470730   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:38.480793   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:38.480849   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:38.491215   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:38.491279   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:38.501503   37047 logs.go:276] 0 containers: []
	W0513 17:38:38.501514   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:38.501593   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:38.512480   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:38.512501   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:38.512507   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:38.516635   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:38.516642   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:38.550829   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:38.550841   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:38.565527   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:38.565537   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:38.576514   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:38.576525   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:38.593800   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:38.593812   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:38.615854   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:38.615864   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:38.641416   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:38.641430   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:38.661421   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:38.661431   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:38.672723   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:38.672733   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:38.686042   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:38.686052   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:38.697763   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:38.697772   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:38.719843   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:38.719852   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:38.731748   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:38.731758   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:38.768153   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:38.768161   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:38.781923   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:38.781938   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:38.803807   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:38.803819   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:41.317194   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:46.319308   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:46.319503   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:38:46.337685   37047 logs.go:276] 2 containers: [3fb8f18ccf20 efba4f55cfe3]
	I0513 17:38:46.337768   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:38:46.351217   37047 logs.go:276] 2 containers: [1cba36bf651d 47dfe97c593d]
	I0513 17:38:46.351304   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:38:46.363954   37047 logs.go:276] 1 containers: [f8f108d5f5bc]
	I0513 17:38:46.364015   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:38:46.374934   37047 logs.go:276] 2 containers: [c2b35816c4a8 2f96dad126c2]
	I0513 17:38:46.375006   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:38:46.395495   37047 logs.go:276] 1 containers: [1db755f6b146]
	I0513 17:38:46.395563   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:38:46.408515   37047 logs.go:276] 2 containers: [f6b69211d4c7 b3d353a21008]
	I0513 17:38:46.408590   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:38:46.423458   37047 logs.go:276] 0 containers: []
	W0513 17:38:46.423469   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:38:46.423526   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:38:46.434072   37047 logs.go:276] 2 containers: [8fa1dcc54dc2 2fdfaf9f57af]
	I0513 17:38:46.434089   37047 logs.go:123] Gathering logs for kube-apiserver [efba4f55cfe3] ...
	I0513 17:38:46.434094   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efba4f55cfe3"
	I0513 17:38:46.459251   37047 logs.go:123] Gathering logs for storage-provisioner [8fa1dcc54dc2] ...
	I0513 17:38:46.459267   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fa1dcc54dc2"
	I0513 17:38:46.471459   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:38:46.471472   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:38:46.475455   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:38:46.475463   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:38:46.515367   37047 logs.go:123] Gathering logs for kube-apiserver [3fb8f18ccf20] ...
	I0513 17:38:46.515379   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fb8f18ccf20"
	I0513 17:38:46.529559   37047 logs.go:123] Gathering logs for kube-proxy [1db755f6b146] ...
	I0513 17:38:46.529569   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1db755f6b146"
	I0513 17:38:46.541005   37047 logs.go:123] Gathering logs for kube-controller-manager [f6b69211d4c7] ...
	I0513 17:38:46.541017   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b69211d4c7"
	I0513 17:38:46.557946   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:38:46.557958   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:38:46.579953   37047 logs.go:123] Gathering logs for coredns [f8f108d5f5bc] ...
	I0513 17:38:46.579960   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8f108d5f5bc"
	I0513 17:38:46.591409   37047 logs.go:123] Gathering logs for kube-scheduler [2f96dad126c2] ...
	I0513 17:38:46.591420   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f96dad126c2"
	I0513 17:38:46.606481   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:38:46.606491   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:38:46.643009   37047 logs.go:123] Gathering logs for etcd [1cba36bf651d] ...
	I0513 17:38:46.643017   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cba36bf651d"
	I0513 17:38:46.657421   37047 logs.go:123] Gathering logs for etcd [47dfe97c593d] ...
	I0513 17:38:46.657431   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47dfe97c593d"
	I0513 17:38:46.673136   37047 logs.go:123] Gathering logs for kube-scheduler [c2b35816c4a8] ...
	I0513 17:38:46.673146   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b35816c4a8"
	I0513 17:38:46.685110   37047 logs.go:123] Gathering logs for kube-controller-manager [b3d353a21008] ...
	I0513 17:38:46.685120   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3d353a21008"
	I0513 17:38:46.704081   37047 logs.go:123] Gathering logs for storage-provisioner [2fdfaf9f57af] ...
	I0513 17:38:46.704091   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2fdfaf9f57af"
	I0513 17:38:46.715919   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:38:46.715931   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:38:49.231095   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:38:54.232605   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:38:54.232694   37047 kubeadm.go:591] duration metric: took 4m3.722933708s to restartPrimaryControlPlane
	W0513 17:38:54.232757   37047 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0513 17:38:54.232783   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0513 17:38:55.293976   37047 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.061202041s)
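
	ssh_runner.go:235 reports long-running commands with a "Completed: ... (duration)" line, as for the kubeadm reset just above. A local sketch of that run-and-time pattern, with os/exec standing in for minikube's SSH session and a sleep standing in for the reset (both assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runTimed executes a command, measures wall-clock duration, and reports
    // it the way ssh_runner.go's "Completed: ... (1.061202041s)" line does.
    func runTimed(name string, args ...string) error {
        start := time.Now()
        cmd := exec.Command(name, args...)
        out, err := cmd.CombinedOutput()
        fmt.Printf("Completed: %s: (%s)\n%s", cmd, time.Since(start), out)
        return err
    }

    func main() {
        // Illustrative stand-in for the kubeadm reset invocation above.
        _ = runTimed("/bin/bash", "-c", "sleep 1 && echo reset done")
    }
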
	I0513 17:38:55.294044   37047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0513 17:38:55.299254   37047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0513 17:38:55.301997   37047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0513 17:38:55.305121   37047 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0513 17:38:55.305127   37047 kubeadm.go:156] found existing configuration files:
	
	I0513 17:38:55.305148   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf
	I0513 17:38:55.308321   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0513 17:38:55.308349   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0513 17:38:55.311030   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf
	I0513 17:38:55.313423   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0513 17:38:55.313443   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0513 17:38:55.316618   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf
	I0513 17:38:55.319613   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0513 17:38:55.319639   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0513 17:38:55.322037   37047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf
	I0513 17:38:55.324823   37047 kubeadm.go:162] "https://control-plane.minikube.internal:56308" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:56308 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0513 17:38:55.324844   37047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
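
	The kubeadm.go:154-162 lines above implement a small check-then-remove pass: each expected kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint and deleted when the check fails (here, because none of the files exist), so the upcoming kubeadm init can rewrite them. A local sketch of the same logic; the endpoint string is taken from the log, and error handling is reduced to the best-effort rm -f semantics shown:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    const endpoint = "https://control-plane.minikube.internal:56308"

    // cleanStaleConfig mirrors the loop above: if a kubeconfig does not
    // mention the expected control-plane endpoint (or cannot be read, as in
    // the "No such file or directory" cases), it is removed.
    func cleanStaleConfig(path string) {
        data, err := os.ReadFile(path)
        if err == nil && bytes.Contains(data, []byte(endpoint)) {
            return // config is current; keep it
        }
        fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
        os.Remove(path) // errors ignored, matching the best-effort `rm -f`
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            cleanStaleConfig("/etc/kubernetes/" + f)
        }
    }
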
	I0513 17:38:55.327834   37047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0513 17:38:55.344975   37047 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0513 17:38:55.345001   37047 kubeadm.go:309] [preflight] Running pre-flight checks
	I0513 17:38:55.399663   37047 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0513 17:38:55.399714   37047 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0513 17:38:55.399770   37047 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0513 17:38:55.447893   37047 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0513 17:38:55.451131   37047 out.go:204]   - Generating certificates and keys ...
	I0513 17:38:55.451170   37047 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0513 17:38:55.451207   37047 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0513 17:38:55.451251   37047 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0513 17:38:55.451286   37047 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0513 17:38:55.451323   37047 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0513 17:38:55.451357   37047 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0513 17:38:55.451390   37047 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0513 17:38:55.451419   37047 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0513 17:38:55.451456   37047 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0513 17:38:55.451493   37047 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0513 17:38:55.451510   37047 kubeadm.go:309] [certs] Using the existing "sa" key
	I0513 17:38:55.451538   37047 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0513 17:38:55.742390   37047 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0513 17:38:55.907583   37047 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0513 17:38:55.988389   37047 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0513 17:38:56.177727   37047 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0513 17:38:56.208755   37047 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0513 17:38:56.209068   37047 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0513 17:38:56.209131   37047 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0513 17:38:56.299179   37047 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0513 17:38:56.307323   37047 out.go:204]   - Booting up control plane ...
	I0513 17:38:56.307376   37047 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0513 17:38:56.307429   37047 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0513 17:38:56.307465   37047 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0513 17:38:56.307521   37047 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0513 17:38:56.307626   37047 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0513 17:39:00.809009   37047 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504488 seconds
	I0513 17:39:00.809119   37047 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0513 17:39:00.812832   37047 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0513 17:39:01.324718   37047 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0513 17:39:01.324905   37047 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-201000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0513 17:39:01.830099   37047 kubeadm.go:309] [bootstrap-token] Using token: rm9hda.wbqm6wosqfjby2vj
	I0513 17:39:01.833827   37047 out.go:204]   - Configuring RBAC rules ...
	I0513 17:39:01.833884   37047 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0513 17:39:01.833931   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0513 17:39:01.841369   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0513 17:39:01.842234   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0513 17:39:01.843115   37047 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0513 17:39:01.843894   37047 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0513 17:39:01.847045   37047 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0513 17:39:01.994264   37047 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0513 17:39:02.235119   37047 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0513 17:39:02.235129   37047 kubeadm.go:309] 
	I0513 17:39:02.235165   37047 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0513 17:39:02.235171   37047 kubeadm.go:309] 
	I0513 17:39:02.235208   37047 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0513 17:39:02.235255   37047 kubeadm.go:309] 
	I0513 17:39:02.235278   37047 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0513 17:39:02.235306   37047 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0513 17:39:02.235334   37047 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0513 17:39:02.235337   37047 kubeadm.go:309] 
	I0513 17:39:02.235365   37047 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0513 17:39:02.235368   37047 kubeadm.go:309] 
	I0513 17:39:02.235394   37047 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0513 17:39:02.235398   37047 kubeadm.go:309] 
	I0513 17:39:02.235424   37047 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0513 17:39:02.235459   37047 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0513 17:39:02.235497   37047 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0513 17:39:02.235505   37047 kubeadm.go:309] 
	I0513 17:39:02.235549   37047 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0513 17:39:02.235591   37047 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0513 17:39:02.235598   37047 kubeadm.go:309] 
	I0513 17:39:02.235646   37047 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rm9hda.wbqm6wosqfjby2vj \
	I0513 17:39:02.235822   37047 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 \
	I0513 17:39:02.235835   37047 kubeadm.go:309] 	--control-plane 
	I0513 17:39:02.235837   37047 kubeadm.go:309] 
	I0513 17:39:02.235876   37047 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0513 17:39:02.235882   37047 kubeadm.go:309] 
	I0513 17:39:02.235925   37047 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rm9hda.wbqm6wosqfjby2vj \
	I0513 17:39:02.235992   37047 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4d136b3dcf7c79b0a3d022a6d22a2cfa7847863b6538a3d210720b96945b0713 
	I0513 17:39:02.236065   37047 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0513 17:39:02.236074   37047 cni.go:84] Creating CNI manager for ""
	I0513 17:39:02.236081   37047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:39:02.239657   37047 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0513 17:39:02.246657   37047 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0513 17:39:02.249478   37047 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
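
	The two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist into it from memory; the file's contents never appear in the log. The sketch below writes an illustrative minimal bridge configuration of the same general shape (bridge plugin plus host-local IPAM), so every value in the JSON is an assumption, not minikube's generated file:

    package main

    import (
        "log"
        "os"
    )

    // An illustrative bridge CNI conflist; minikube's real 1-k8s.conflist is
    // generated in memory and only its size (496 bytes) appears in the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
        if err != nil {
            log.Fatal(err)
        }
    }
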
	I0513 17:39:02.254214   37047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0513 17:39:02.254248   37047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0513 17:39:02.254270   37047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-201000 minikube.k8s.io/updated_at=2024_05_13T17_39_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=bf4e5d623f67cc0fbec852b09e6284e0ebf63761 minikube.k8s.io/name=stopped-upgrade-201000 minikube.k8s.io/primary=true
	I0513 17:39:02.296264   37047 ops.go:34] apiserver oom_adj: -16
	I0513 17:39:02.296264   37047 kubeadm.go:1107] duration metric: took 42.045584ms to wait for elevateKubeSystemPrivileges
	W0513 17:39:02.296384   37047 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0513 17:39:02.296391   37047 kubeadm.go:393] duration metric: took 4m11.800065375s to StartCluster
	I0513 17:39:02.296401   37047 settings.go:142] acquiring lock: {Name:mk9ef358ebdddf34ee47447e0095ef8dc921e138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:39:02.296496   37047 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:39:02.296938   37047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/kubeconfig: {Name:mk4053cf25e56f4e4112583f80c31fb87a4c6322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:39:02.297142   37047 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:39:02.301493   37047 out.go:177] * Verifying Kubernetes components...
	I0513 17:39:02.297151   37047 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0513 17:39:02.297226   37047 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
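
	start.go:234 a few lines above commits to waiting up to 6m0s for the node, and the healthz probes that follow (17:39:02 onward) are iterations of that wait. A sketch of the outer retry loop, assuming a fixed retry interval inside an overall context deadline; the demo probe is a stand-in that succeeds on the third try:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitForHealthy retries probe until it succeeds or the overall deadline
    // (6m0s in the log above) expires. Each probe has its own 5s budget, so a
    // dead apiserver yields the steady drumbeat of timeouts seen in the log.
    func waitForHealthy(ctx context.Context, probe func() error) error {
        ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
        defer cancel()
        for {
            if err := probe(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("node never became healthy within 6m0s")
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        tries := 0
        err := waitForHealthy(context.Background(), func() error {
            tries++
            if tries < 3 {
                return fmt.Errorf("healthz still failing")
            }
            return nil
        })
        fmt.Println(err, "after", tries, "tries")
    }
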
	I0513 17:39:02.309694   37047 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-201000"
	I0513 17:39:02.309700   37047 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-201000"
	I0513 17:39:02.309709   37047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0513 17:39:02.309717   37047 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-201000"
	W0513 17:39:02.309725   37047 addons.go:243] addon storage-provisioner should already be in state true
	I0513 17:39:02.309744   37047 host.go:66] Checking if "stopped-upgrade-201000" exists ...
	I0513 17:39:02.309717   37047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-201000"
	I0513 17:39:02.310188   37047 retry.go:31] will retry after 733.363226ms: connect: dial unix /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/monitor: connect: connection refused
	I0513 17:39:02.314656   37047 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0513 17:39:02.318729   37047 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:39:02.318735   37047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0513 17:39:02.318742   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
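
	sshutil.go:53 above records the SSH client used to push addon manifests: key auth as user "docker" against the forwarded port 56273 on localhost. A sketch of an equivalent client using the common golang.org/x/crypto/ssh package (an assumption; minikube's own sshutil is not reproduced here), with host-key checking skipped purely for illustration:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dial opens an SSH connection matching the client line in the log:
    // public-key auth as the given user on the forwarded localhost port.
    func dial(host string, port int, keyPath, user string) (*ssh.Client, error) {
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", host, port), cfg)
    }

    func main() {
        // Key path copied from the log's SSHKeyPath field.
        key := "/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa"
        client, err := dial("localhost", 56273, key, "docker")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl start kubelet")
        fmt.Println(string(out), err)
    }
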
	I0513 17:39:02.400611   37047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0513 17:39:02.406953   37047 api_server.go:52] waiting for apiserver process to appear ...
	I0513 17:39:02.407021   37047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0513 17:39:02.410923   37047 api_server.go:72] duration metric: took 113.767875ms to wait for apiserver process to appear ...
	I0513 17:39:02.410930   37047 api_server.go:88] waiting for apiserver healthz status ...
	I0513 17:39:02.410938   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:02.467932   37047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0513 17:39:03.046602   37047 kapi.go:59] client config for stopped-upgrade-201000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/stopped-upgrade-201000/client.key", CAFile:"/Users/jenkins/minikube-integration/18872-34554/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ca1e10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0513 17:39:03.046763   37047 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-201000"
	W0513 17:39:03.046769   37047 addons.go:243] addon default-storageclass should already be in state true
	I0513 17:39:03.046781   37047 host.go:66] Checking if "stopped-upgrade-201000" exists ...
	I0513 17:39:03.047449   37047 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0513 17:39:03.047454   37047 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0513 17:39:03.047460   37047 sshutil.go:53] new ssh client: &{IP:localhost Port:56273 SSHKeyPath:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/stopped-upgrade-201000/id_rsa Username:docker}
	I0513 17:39:03.086288   37047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0513 17:39:07.413085   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:07.413173   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:12.413847   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:12.413889   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:17.414332   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:17.414371   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:22.415016   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:22.415066   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:27.415810   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:27.415853   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:32.416881   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:32.416924   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0513 17:39:33.191932   37047 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0513 17:39:33.196903   37047 out.go:177] * Enabled addons: storage-provisioner
	I0513 17:39:33.207761   37047 addons.go:505] duration metric: took 30.911227292s for enable addons: enabled=[storage-provisioner]
	I0513 17:39:37.418191   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:37.418209   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:42.419782   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:42.419825   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:47.421895   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:47.421938   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:52.424084   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:52.424115   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:39:57.426222   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:39:57.426267   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:02.428402   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:02.428546   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:02.443107   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:02.443190   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:02.455337   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:02.455411   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:02.465603   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:02.465683   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:02.475808   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:02.475872   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:02.486149   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:02.486219   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:02.496350   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:02.496422   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:02.506746   37047 logs.go:276] 0 containers: []
	W0513 17:40:02.506759   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:02.506832   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:02.517760   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:02.517776   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:02.517788   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:02.535096   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:02.535106   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:02.548063   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:02.548076   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:02.583576   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:02.583588   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:02.597915   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:02.597924   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:02.612349   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:02.612360   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:02.624382   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:02.624395   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:02.636081   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:02.636092   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:02.647883   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:02.647894   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:02.659820   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:02.659832   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:02.683399   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:02.683406   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:02.720280   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:02.720295   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:02.724646   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:02.724653   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:05.240708   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:10.242948   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:10.243122   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:10.254444   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:10.254511   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:10.265586   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:10.265658   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:10.276156   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:10.276222   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:10.287008   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:10.287066   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:10.300109   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:10.300168   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:10.310437   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:10.310491   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:10.320590   37047 logs.go:276] 0 containers: []
	W0513 17:40:10.320599   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:10.320643   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:10.332459   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:10.332473   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:10.332480   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:10.349681   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:10.349690   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:10.361329   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:10.361340   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:10.365552   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:10.365560   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:10.400320   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:10.400330   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:10.414399   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:10.414408   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:10.431783   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:10.431792   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:10.443427   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:10.443437   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:10.454784   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:10.454795   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:10.491835   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:10.491844   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:10.506160   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:10.506168   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:10.518146   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:10.518160   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:10.529456   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:10.529466   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:13.056015   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:18.058618   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:18.058769   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:18.077369   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:18.077443   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:18.092549   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:18.092611   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:18.103293   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:18.103358   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:18.119889   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:18.119952   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:18.130692   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:18.130753   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:18.141128   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:18.141189   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:18.152793   37047 logs.go:276] 0 containers: []
	W0513 17:40:18.152806   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:18.152858   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:18.163200   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:18.163216   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:18.163221   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:18.181010   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:18.181020   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:18.217410   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:18.217421   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:18.221780   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:18.221788   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:18.257280   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:18.257293   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:18.269700   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:18.269712   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:18.285242   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:18.285254   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:18.301171   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:18.301182   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:18.313315   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:18.313325   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:18.339060   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:18.339072   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:18.350995   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:18.351010   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:18.365039   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:18.365049   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:18.378919   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:18.378929   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:20.892699   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:25.893660   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:25.893818   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:25.908095   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:25.908168   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:25.919382   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:25.919447   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:25.930719   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:25.930793   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:25.941448   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:25.941514   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:25.951631   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:25.951694   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:25.962010   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:25.962073   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:25.972339   37047 logs.go:276] 0 containers: []
	W0513 17:40:25.972350   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:25.972405   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:25.987521   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:25.987535   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:25.987540   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:26.025511   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:26.025521   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:26.039133   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:26.039142   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:26.050301   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:26.050311   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:26.064660   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:26.064672   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:26.087014   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:26.087027   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:26.100745   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:26.100758   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:26.138558   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:26.139889   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:26.144098   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:26.144104   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:26.155040   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:26.155052   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:26.167130   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:26.167140   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:26.190638   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:26.190648   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:26.205133   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:26.205144   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:28.719545   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:33.721705   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:33.721834   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:33.733840   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:33.733931   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:33.746826   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:33.746922   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:33.759450   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:33.759531   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:33.773332   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:33.773399   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:33.786770   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:33.786846   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:33.810353   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:33.810440   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:33.823592   37047 logs.go:276] 0 containers: []
	W0513 17:40:33.823604   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:33.823695   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:33.844686   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:33.844754   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:33.844781   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:33.861709   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:33.861721   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:33.885340   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:33.885356   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:33.919182   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:33.919194   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:33.931471   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:33.931485   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:33.983438   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:33.983454   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:34.006366   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:34.006380   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:34.021340   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:34.021353   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:34.039889   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:34.039908   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:34.054168   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:34.054187   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:34.095351   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:34.095370   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:34.100325   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:34.100336   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:34.126607   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:34.126619   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:36.639326   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:41.642081   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:41.642300   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:41.668843   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:41.668930   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:41.681821   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:41.681882   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:41.693507   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:41.693561   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:41.703950   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:41.704020   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:41.714514   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:41.714581   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:41.724775   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:41.724842   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:41.734895   37047 logs.go:276] 0 containers: []
	W0513 17:40:41.734903   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:41.734945   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:41.745421   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:41.745437   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:41.745442   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:41.750425   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:41.750432   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:41.769045   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:41.769061   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:41.783604   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:41.783619   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:41.797897   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:41.797910   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:41.811325   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:41.811341   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:41.837271   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:41.837296   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:41.851193   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:41.851207   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:41.888352   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:41.888369   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:41.925662   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:41.925672   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:41.939720   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:41.939730   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:41.956438   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:41.956448   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:41.972414   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:41.972425   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
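Besides per-container logs, every cycle also collects host-side diagnostics: the kubelet and docker/cri-docker unit journals capped at 400 lines, and warning-or-worse kernel messages. The three command lines are copied verbatim from the log above; wiring them together in Go might look like this (hostLogs is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogs runs the same host-side commands the report gathers with.
    func hostLogs() map[string]string {
        cmds := map[string][]string{
            "kubelet": {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
            "Docker":  {"sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
            "dmesg": {"/bin/bash", "-c",
                "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        out := make(map[string]string)
        for name, argv := range cmds {
            b, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
            if err != nil {
                out[name] = fmt.Sprintf("error: %v", err)
                continue
            }
            out[name] = string(b)
        }
        return out
    }

    func main() {
        for name, logs := range hostLogs() {
            fmt.Printf("==> %s <==\n%s\n", name, logs)
        }
    }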
	I0513 17:40:44.492632   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:49.494836   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:49.495287   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:49.536060   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:49.536197   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:49.559010   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:49.559111   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:49.573835   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:49.573910   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:49.586096   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:49.586151   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:49.596674   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:49.596733   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:49.607323   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:49.607380   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:49.617431   37047 logs.go:276] 0 containers: []
	W0513 17:40:49.617444   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:49.617500   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:49.628780   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:49.628793   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:49.628801   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:49.671716   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:49.671726   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:49.687000   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:49.687015   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:49.701859   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:49.701871   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:40:49.726393   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:49.726404   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:49.737942   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:49.737953   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:49.774518   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:49.774525   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:49.778780   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:49.778786   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:49.792691   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:49.792700   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:49.804529   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:49.804537   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:49.821566   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:49.821576   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:49.833291   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:49.833301   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:49.847539   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:49.847548   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:52.361549   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:40:57.364005   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:40:57.364495   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:40:57.403358   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:40:57.403491   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:40:57.424731   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:40:57.424831   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:40:57.440441   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:40:57.440503   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:40:57.453581   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:40:57.453649   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:40:57.464972   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:40:57.465032   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:40:57.475687   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:40:57.475754   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:40:57.486247   37047 logs.go:276] 0 containers: []
	W0513 17:40:57.486260   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:40:57.486313   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:40:57.503702   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:40:57.503717   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:40:57.503721   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:40:57.518175   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:40:57.518184   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:40:57.529951   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:40:57.529964   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:40:57.534503   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:40:57.534508   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:40:57.570650   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:40:57.570662   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:40:57.586661   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:40:57.586673   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:40:57.597763   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:40:57.597775   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:40:57.609162   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:40:57.609173   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:40:57.620734   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:40:57.620746   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:40:57.658520   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:40:57.658529   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:40:57.671835   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:40:57.671847   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:40:57.689255   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:40:57.689265   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:40:57.700936   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:40:57.700949   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:00.226562   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:05.228757   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:05.228942   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:05.257406   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:05.257529   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:05.276500   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:05.276578   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:05.297623   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:41:05.297688   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:05.308826   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:05.308894   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:05.319339   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:05.319406   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:05.329417   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:05.329482   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:05.339555   37047 logs.go:276] 0 containers: []
	W0513 17:41:05.339565   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:05.339617   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:05.353530   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:05.353544   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:05.353550   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:05.365278   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:05.365289   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:05.390259   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:05.390266   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:05.401475   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:05.401488   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:05.415720   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:05.415732   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:05.427279   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:05.427293   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:41:05.441231   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:05.441242   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:05.455732   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:05.455742   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:05.467592   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:05.467602   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:05.478954   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:05.478966   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:41:05.496844   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:05.496854   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:05.532192   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:05.532199   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:05.536410   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:05.536418   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
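The "container status" step is the one command here with a built-in fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl when it is installed and degrade to the Docker CLI otherwise. Roughly the same decision expressed in Go (containerStatus is a made-up helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl, which speaks to any CRI runtime,
    // and falls back to the Docker CLI on Docker-runtime nodes like this one.
    func containerStatus() (string, error) {
        tool := "crictl"
        if _, err := exec.LookPath("crictl"); err != nil {
            tool = "docker"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        status, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(status)
    }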
	I0513 17:41:08.073398   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:13.075292   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:13.075616   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:13.110229   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:13.110350   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:13.129656   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:13.129738   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:13.144176   37047 logs.go:276] 2 containers: [c225f01c1531 eb9b3325d858]
	I0513 17:41:13.144252   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:13.156097   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:13.156166   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:13.166689   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:13.166763   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:13.177507   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:13.177563   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:13.188835   37047 logs.go:276] 0 containers: []
	W0513 17:41:13.188846   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:13.188897   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:13.203147   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:13.203161   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:13.203167   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:41:13.237453   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:13.237464   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:13.251279   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:13.251291   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:13.263020   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:13.263031   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:13.274702   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:13.274713   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:41:13.288730   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:13.288741   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:13.304591   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:13.304602   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:13.328156   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:13.328164   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:13.363211   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:13.363225   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:13.375462   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:13.375472   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:13.389968   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:13.389978   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:41:13.407467   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:13.407477   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:13.425567   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:13.425580   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:15.931994   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:20.933375   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:20.933723   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:20.973263   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:20.973389   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:20.998453   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:20.998535   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:21.012928   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:41:21.013001   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:21.024697   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:21.024755   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:21.037562   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:21.037634   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:21.048815   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:21.048871   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:21.058761   37047 logs.go:276] 0 containers: []
	W0513 17:41:21.058776   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:21.058835   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:21.069402   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:21.069420   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:41:21.069425   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:41:21.080904   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:21.080916   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:21.116514   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:21.116523   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:21.120869   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:21.120876   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:21.135274   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:21.136799   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:21.151029   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:21.151041   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:21.162951   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:21.162964   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:21.175175   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:41:21.175186   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:41:21.187066   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:21.187076   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:21.199215   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:21.199227   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:21.212208   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:21.212219   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:41:21.229579   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:21.229590   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:21.253037   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:21.253045   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:41:21.288720   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:21.288733   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:21.301270   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:21.301282   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
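Stepping back, this whole stretch of the log is one wait loop: probe /healthz, and when the probe times out, dump the diagnostics above and try again a few seconds later until an overall deadline expires. A compressed sketch of that control flow; the retry interval and deadline below are guesses read off the timestamps, not minikube's exact configuration:

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthy polls url until it answers 200 OK or deadline passes.
    func waitForHealthy(url string, interval, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            fmt.Println("Checking apiserver healthz at", url, "...")
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            } else {
                fmt.Println("stopped:", err)
                // This is the point where the real flow gathers the logs above.
            }
            time.Sleep(interval)
        }
        return errors.New("apiserver never reported healthy")
    }

    func main() {
        err := waitForHealthy("https://10.0.2.15:8443/healthz", 3*time.Second, 2*time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }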
	I0513 17:41:23.817155   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:28.818629   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:28.818989   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:28.864003   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:28.864118   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:28.884134   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:28.884212   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:28.898830   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:41:28.898917   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:28.911127   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:28.911207   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:28.922403   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:28.922466   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:28.932788   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:28.932853   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:28.943216   37047 logs.go:276] 0 containers: []
	W0513 17:41:28.943228   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:28.943283   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:28.958037   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:28.958052   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:28.958057   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:41:28.972114   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:28.972127   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:41:29.006913   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:29.006922   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:29.022133   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:41:29.022147   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:41:29.033725   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:29.033735   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:29.045909   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:29.045921   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:29.057353   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:29.057366   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:29.061575   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:41:29.061582   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:41:29.076837   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:29.076847   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:29.088744   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:29.088757   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:29.101154   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:29.101167   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:41:29.131720   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:29.131733   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:29.169062   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:29.169072   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:29.182795   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:29.182805   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:29.195844   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:29.195854   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:31.721922   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:36.724507   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:36.724834   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:36.761990   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:36.762110   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:36.782377   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:36.782482   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:36.797802   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:41:36.797869   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:36.812262   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:36.812324   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:36.823029   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:36.823099   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:36.833102   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:36.833157   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:36.843299   37047 logs.go:276] 0 containers: []
	W0513 17:41:36.843309   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:36.843364   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:36.854530   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:36.854547   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:36.854553   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:41:36.871781   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:36.871792   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:41:36.907146   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:41:36.907160   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:41:36.918592   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:41:36.918605   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:41:36.930194   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:36.930207   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:36.941844   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:36.941854   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:36.959677   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:36.959690   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:36.971932   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:36.971942   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:41:36.986777   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:36.986787   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:36.998322   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:36.998333   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:37.033804   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:37.033811   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:37.037704   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:37.037713   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:37.057827   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:37.057837   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:37.082808   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:37.082815   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:37.097133   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:37.097143   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:39.610830   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:44.613278   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:44.613745   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:44.656371   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:44.656480   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:44.675168   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:44.675242   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:44.689945   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:41:44.690007   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:44.701808   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:44.701863   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:44.712435   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:44.712508   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:44.722650   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:44.722717   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:44.732590   37047 logs.go:276] 0 containers: []
	W0513 17:41:44.732600   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:44.732653   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:44.742985   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:44.743001   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:44.743006   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:44.763665   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:41:44.763677   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:41:44.775695   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:41:44.775706   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:41:44.787429   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:44.787441   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:44.801601   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:44.801614   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:44.813274   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:44.813284   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:44.848165   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:44.848175   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:44.852257   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:44.852266   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:41:44.870555   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:44.870567   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:44.895867   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:44.895874   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:41:44.930186   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:44.930199   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:44.944570   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:44.944577   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:41:44.959242   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:44.959250   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:44.976096   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:44.976107   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:44.988571   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:44.988580   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:47.508530   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:41:52.510536   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:41:52.510845   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:41:52.541762   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:41:52.541889   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:41:52.560437   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:41:52.560523   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:41:52.574770   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:41:52.574846   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:41:52.586752   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:41:52.586822   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:41:52.597410   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:41:52.597470   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:41:52.608204   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:41:52.608273   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:41:52.618354   37047 logs.go:276] 0 containers: []
	W0513 17:41:52.618365   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:41:52.618412   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:41:52.628969   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:41:52.628985   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:41:52.628989   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:41:52.640720   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:41:52.640733   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:41:52.663908   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:41:52.663916   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:41:52.677862   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:41:52.677871   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:41:52.690025   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:41:52.690035   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:41:52.704256   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:41:52.704266   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:41:52.716311   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:41:52.716323   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:41:52.728310   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:41:52.728319   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:41:52.742078   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:41:52.742087   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:41:52.753896   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:41:52.753906   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:41:52.766321   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:41:52.766329   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:41:52.803928   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:41:52.803942   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:41:52.808797   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:41:52.808811   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:41:52.850814   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:41:52.850833   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:41:52.864075   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:41:52.864091   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
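Finally, the "describe nodes" step does not use the host's kubectl: it shells out to the version-matched binary bundled in the guest under /var/lib/minikube/binaries/v1.24.1. The same invocation from Go, with both paths copied from the log (they exist only inside the minikube VM, so this sketch is meant to run on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.24.1/kubectl"
        out, err := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }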
	I0513 17:41:55.386075   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:00.388250   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:00.388643   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:00.420596   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:00.420710   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:00.438587   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:00.438658   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:00.458679   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:00.458752   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:00.472225   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:00.472298   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:00.485522   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:00.485590   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:00.495982   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:00.496040   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:00.506527   37047 logs.go:276] 0 containers: []
	W0513 17:42:00.506538   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:00.506588   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:00.517371   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:00.517389   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:00.517394   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:00.529381   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:00.529389   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:00.546639   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:00.546647   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:00.551269   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:00.551278   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:00.567955   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:00.567963   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:00.592522   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:00.592528   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:00.627121   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:00.627130   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:00.638757   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:00.638766   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:00.653158   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:00.653172   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:00.666835   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:00.666844   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:00.682372   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:00.682381   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:00.694328   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:00.694336   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:00.705670   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:00.705684   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:00.719842   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:00.719851   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:00.731808   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:00.731818   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:03.269130   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:08.271682   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:08.271781   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:08.283154   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:08.283226   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:08.297321   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:08.297406   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:08.314070   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:08.314129   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:08.330358   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:08.330426   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:08.342150   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:08.342225   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:08.354116   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:08.354176   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:08.365856   37047 logs.go:276] 0 containers: []
	W0513 17:42:08.365870   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:08.365909   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:08.376717   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:08.376733   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:08.376738   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:08.414328   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:08.414336   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:08.426981   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:08.426993   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:08.448081   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:08.448090   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:08.462250   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:08.462260   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:08.501253   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:08.501266   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:08.518492   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:08.518503   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:08.538074   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:08.538085   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:08.554884   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:08.554895   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:08.567694   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:08.567711   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:08.572289   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:08.572301   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:08.590121   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:08.590138   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:08.605286   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:08.605299   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:08.619130   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:08.619144   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:08.632369   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:08.632380   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:11.159347   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:16.160426   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:16.160687   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:16.188441   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:16.188555   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:16.210574   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:16.210649   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:16.225896   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:16.225964   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:16.237002   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:16.237063   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:16.247145   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:16.247210   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:16.257290   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:16.257359   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:16.267642   37047 logs.go:276] 0 containers: []
	W0513 17:42:16.267652   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:16.267697   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:16.278093   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:16.278110   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:16.278117   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:16.282700   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:16.282708   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:16.298746   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:16.298757   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:16.314199   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:16.314212   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:16.332005   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:16.332016   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:16.343498   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:16.343511   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:16.355240   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:16.355251   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:16.379677   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:16.379683   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:16.397180   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:16.397192   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:16.408413   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:16.408426   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:16.419982   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:16.419994   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:16.436361   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:16.436370   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:16.451105   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:16.451117   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:16.468471   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:16.468484   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:16.504182   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:16.504189   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:19.044265   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:24.046908   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:24.047279   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:24.091089   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:24.091220   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:24.115429   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:24.115520   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:24.129602   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:24.129675   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:24.141429   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:24.141499   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:24.152297   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:24.152361   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:24.162849   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:24.162917   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:24.173080   37047 logs.go:276] 0 containers: []
	W0513 17:42:24.173092   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:24.173146   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:24.184104   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:24.184120   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:24.184125   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:24.197918   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:24.197927   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:24.235066   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:24.235073   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:24.272416   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:24.272428   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:24.286855   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:24.286867   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:24.300600   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:24.300610   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:24.316745   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:24.316756   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:24.339711   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:24.339718   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:24.352219   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:24.352230   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:24.357007   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:24.357013   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:24.368741   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:24.368752   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:24.380290   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:24.380303   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:24.397492   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:24.397502   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:24.408891   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:24.408899   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:24.427546   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:24.427556   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:26.941481   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:31.944141   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:31.944259   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:31.956149   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:31.956226   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:31.968162   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:31.968243   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:31.979821   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:31.979907   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:31.999874   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:31.999933   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:32.011547   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:32.011628   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:32.022347   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:32.022408   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:32.033702   37047 logs.go:276] 0 containers: []
	W0513 17:42:32.033716   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:32.033770   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:32.045057   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:32.045074   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:32.045080   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:32.082631   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:32.082653   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:32.087264   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:32.087275   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:32.104203   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:32.104216   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:32.116479   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:32.116488   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:32.128058   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:32.128069   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:32.147375   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:32.147387   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:32.186181   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:32.186192   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:32.199653   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:32.199661   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:32.216967   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:32.216980   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:32.229551   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:32.229561   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:32.245168   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:32.245179   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:32.261397   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:32.261407   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:32.277618   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:32.277631   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:32.297477   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:32.297487   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:34.823938   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:39.826516   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:39.826650   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:39.837854   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:39.837915   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:39.848170   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:39.848227   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:39.858735   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:39.858803   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:39.869374   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:39.869440   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:39.879832   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:39.879899   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:39.890217   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:39.890277   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:39.900347   37047 logs.go:276] 0 containers: []
	W0513 17:42:39.900357   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:39.900402   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:39.910814   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:39.910831   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:39.910836   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:39.923209   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:39.923220   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:39.934404   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:39.934414   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:39.945753   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:39.945762   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:39.969060   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:39.969074   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:40.004806   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:40.004813   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:40.039732   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:40.039742   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:40.064625   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:40.064636   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:40.069185   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:40.069192   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:40.083336   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:40.083347   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:40.095109   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:40.095117   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:40.106368   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:40.106377   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:40.120751   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:40.120759   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:40.135137   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:40.135150   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:40.146618   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:40.146629   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:42.660341   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:47.663123   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:47.663581   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:47.703070   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:47.703194   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:47.726504   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:47.726604   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:47.743470   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:47.743543   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:47.755932   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:47.755995   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:47.766999   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:47.767064   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:47.778059   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:47.778126   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:47.788796   37047 logs.go:276] 0 containers: []
	W0513 17:42:47.788807   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:47.788860   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:47.800238   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:47.800253   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:47.800257   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:47.819241   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:47.819252   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:47.832244   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:47.832255   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:47.867342   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:47.867350   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:47.871921   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:47.871929   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:47.884121   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:47.884131   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:47.903897   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:47.903909   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:47.915707   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:47.915718   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:47.927947   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:47.927957   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:47.943514   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:47.943527   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:47.956540   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:47.956553   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:47.980894   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:47.980904   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:48.017064   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:48.017077   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:48.031665   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:48.031676   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:48.046439   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:48.046607   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:50.564397   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:42:55.567013   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:42:55.567096   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0513 17:42:55.579368   37047 logs.go:276] 1 containers: [e2a23b9f71ef]
	I0513 17:42:55.579441   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0513 17:42:55.591532   37047 logs.go:276] 1 containers: [3b12602a91d6]
	I0513 17:42:55.591601   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0513 17:42:55.603936   37047 logs.go:276] 4 containers: [466d218d39f5 473f7837a8ea c225f01c1531 eb9b3325d858]
	I0513 17:42:55.604016   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0513 17:42:55.618351   37047 logs.go:276] 1 containers: [99b90e49240b]
	I0513 17:42:55.618443   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0513 17:42:55.630553   37047 logs.go:276] 1 containers: [8f9796d59c8a]
	I0513 17:42:55.630628   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0513 17:42:55.642869   37047 logs.go:276] 1 containers: [0930878ca9f6]
	I0513 17:42:55.642943   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0513 17:42:55.653856   37047 logs.go:276] 0 containers: []
	W0513 17:42:55.653867   37047 logs.go:278] No container was found matching "kindnet"
	I0513 17:42:55.653926   37047 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0513 17:42:55.665989   37047 logs.go:276] 1 containers: [a863eca32690]
	I0513 17:42:55.666006   37047 logs.go:123] Gathering logs for coredns [466d218d39f5] ...
	I0513 17:42:55.666013   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 466d218d39f5"
	I0513 17:42:55.678921   37047 logs.go:123] Gathering logs for coredns [eb9b3325d858] ...
	I0513 17:42:55.678933   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb9b3325d858"
	I0513 17:42:55.693567   37047 logs.go:123] Gathering logs for dmesg ...
	I0513 17:42:55.693578   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0513 17:42:55.698346   37047 logs.go:123] Gathering logs for kube-scheduler [99b90e49240b] ...
	I0513 17:42:55.698357   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99b90e49240b"
	I0513 17:42:55.713509   37047 logs.go:123] Gathering logs for coredns [473f7837a8ea] ...
	I0513 17:42:55.713523   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 473f7837a8ea"
	I0513 17:42:55.727055   37047 logs.go:123] Gathering logs for etcd [3b12602a91d6] ...
	I0513 17:42:55.727068   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b12602a91d6"
	I0513 17:42:55.742607   37047 logs.go:123] Gathering logs for storage-provisioner [a863eca32690] ...
	I0513 17:42:55.742619   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a863eca32690"
	I0513 17:42:55.755301   37047 logs.go:123] Gathering logs for container status ...
	I0513 17:42:55.755310   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0513 17:42:55.767845   37047 logs.go:123] Gathering logs for kubelet ...
	I0513 17:42:55.767857   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0513 17:42:55.806176   37047 logs.go:123] Gathering logs for kube-apiserver [e2a23b9f71ef] ...
	I0513 17:42:55.806188   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2a23b9f71ef"
	I0513 17:42:55.822150   37047 logs.go:123] Gathering logs for coredns [c225f01c1531] ...
	I0513 17:42:55.822161   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c225f01c1531"
	I0513 17:42:55.839250   37047 logs.go:123] Gathering logs for kube-proxy [8f9796d59c8a] ...
	I0513 17:42:55.839260   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f9796d59c8a"
	I0513 17:42:55.855683   37047 logs.go:123] Gathering logs for kube-controller-manager [0930878ca9f6] ...
	I0513 17:42:55.855695   37047 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0930878ca9f6"
	I0513 17:42:55.875032   37047 logs.go:123] Gathering logs for Docker ...
	I0513 17:42:55.875043   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0513 17:42:55.900358   37047 logs.go:123] Gathering logs for describe nodes ...
	I0513 17:42:55.900373   37047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0513 17:42:58.442667   37047 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0513 17:43:03.445384   37047 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0513 17:43:03.451408   37047 out.go:177] 
	W0513 17:43:03.455489   37047 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0513 17:43:03.455518   37047 out.go:239] * 
	W0513 17:43:03.456941   37047 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:03.472368   37047 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-201000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (564.96s)
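
Note: the cycle repeated above is minikube's apiserver wait loop: probe https://10.0.2.15:8443/healthz, give up after the ~5 s client timeout ("context deadline exceeded"), gather logs from every control-plane container, and retry, until the overall 6m0s node wait expires and the run exits with GUEST_START. As a rough illustration only (not minikube's actual code; the endpoint is copied from the log, and TLS verification is skipped because this sketch has no cluster CA), an equivalent single probe in Go looks like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The ~5 s timeout matches the gap between "Checking apiserver healthz"
		// and "stopped: ... context deadline exceeded" in the log above.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // a healthy apiserver returns 200 ok
	}
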

                                                
                                    
TestPause/serial/Start (9.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-947000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-947000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.719242708s)

                                                
                                                
-- stdout --
	* [pause-947000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-947000" primary control-plane node in "pause-947000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-947000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-947000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-947000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-947000 -n pause-947000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-947000 -n pause-947000: exit status 7 (62.890875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-947000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.78s)
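
Note: this failure and the NoKubernetes failures below share one root cause: the qemu2 driver could not reach the socket_vmnet control socket at /var/run/socket_vmnet ("Connection refused"), so every VM create/restart aborts before Kubernetes is even attempted. A quick way to confirm whether the daemon is listening is to dial the socket directly; this sketch is illustrative only (the socket path is copied from the error text above and may differ per install):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket the qemu2 driver uses. "connection refused"
		// (or "no such file or directory") means socket_vmnet is not running.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
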

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 : exit status 80 (9.837387041s)

                                                
                                                
-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-839000" primary control-plane node in "NoKubernetes-839000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-839000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (44.45075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 : exit status 80 (5.238290125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-839000
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (56.5335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

                                                
                                    
TestNoKubernetes/serial/Start (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 : exit status 80 (5.241423209s)

                                                
                                                
-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-839000
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (48.275541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 : exit status 80 (5.278879041s)
                                                
-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-839000
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-839000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-839000 -n NoKubernetes-839000: exit status 7 (30.888666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-839000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.961078209s)

-- stdout --
	* [auto-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-748000" primary control-plane node in "auto-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:41:45.167113   37321 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:41:45.167250   37321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:41:45.167252   37321 out.go:304] Setting ErrFile to fd 2...
	I0513 17:41:45.167255   37321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:41:45.167386   37321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:41:45.168523   37321 out.go:298] Setting JSON to false
	I0513 17:41:45.185062   37321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27675,"bootTime":1715619630,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:41:45.185134   37321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:41:45.191791   37321 out.go:177] * [auto-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:41:45.199677   37321 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:41:45.202608   37321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:41:45.199712   37321 notify.go:220] Checking for updates...
	I0513 17:41:45.208663   37321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:41:45.210118   37321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:41:45.213667   37321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:41:45.216674   37321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:41:45.220072   37321 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:41:45.220135   37321 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:41:45.220182   37321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:41:45.224666   37321 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:41:45.231684   37321 start.go:297] selected driver: qemu2
	I0513 17:41:45.231693   37321 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:41:45.231700   37321 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:41:45.233947   37321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:41:45.236644   37321 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:41:45.239726   37321 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:41:45.239741   37321 cni.go:84] Creating CNI manager for ""
	I0513 17:41:45.239748   37321 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:41:45.239751   37321 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:41:45.239778   37321 start.go:340] cluster config:
	{Name:auto-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:41:45.244197   37321 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:41:45.251643   37321 out.go:177] * Starting "auto-748000" primary control-plane node in "auto-748000" cluster
	I0513 17:41:45.255709   37321 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:41:45.255723   37321 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:41:45.255732   37321 cache.go:56] Caching tarball of preloaded images
	I0513 17:41:45.255786   37321 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:41:45.255796   37321 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:41:45.255862   37321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/auto-748000/config.json ...
	I0513 17:41:45.255878   37321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/auto-748000/config.json: {Name:mkec7efa29cbbe7e68930eb873cac4a9a085a5e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:41:45.256253   37321 start.go:360] acquireMachinesLock for auto-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:41:45.256285   37321 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "auto-748000"
	I0513 17:41:45.256296   37321 start.go:93] Provisioning new machine with config: &{Name:auto-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:41:45.256339   37321 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:41:45.263671   37321 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:41:45.278632   37321 start.go:159] libmachine.API.Create for "auto-748000" (driver="qemu2")
	I0513 17:41:45.278658   37321 client.go:168] LocalClient.Create starting
	I0513 17:41:45.278710   37321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:41:45.278739   37321 main.go:141] libmachine: Decoding PEM data...
	I0513 17:41:45.278755   37321 main.go:141] libmachine: Parsing certificate...
	I0513 17:41:45.278795   37321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:41:45.278817   37321 main.go:141] libmachine: Decoding PEM data...
	I0513 17:41:45.278824   37321 main.go:141] libmachine: Parsing certificate...
	I0513 17:41:45.279194   37321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:41:45.423730   37321 main.go:141] libmachine: Creating SSH key...
	I0513 17:41:45.707683   37321 main.go:141] libmachine: Creating Disk image...
	I0513 17:41:45.707707   37321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:41:45.707978   37321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2
	I0513 17:41:45.721443   37321 main.go:141] libmachine: STDOUT: 
	I0513 17:41:45.721473   37321 main.go:141] libmachine: STDERR: 
	I0513 17:41:45.721548   37321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2 +20000M
	I0513 17:41:45.732971   37321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:41:45.732999   37321 main.go:141] libmachine: STDERR: 
	I0513 17:41:45.733016   37321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2
	I0513 17:41:45.733021   37321 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:41:45.733050   37321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b3:71:59:ce:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2
	I0513 17:41:45.734847   37321 main.go:141] libmachine: STDOUT: 
	I0513 17:41:45.734866   37321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:41:45.734886   37321 client.go:171] duration metric: took 456.232916ms to LocalClient.Create
	I0513 17:41:47.737072   37321 start.go:128] duration metric: took 2.48074575s to createHost
	I0513 17:41:47.737195   37321 start.go:83] releasing machines lock for "auto-748000", held for 2.480938208s
	W0513 17:41:47.737287   37321 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:41:47.747967   37321 out.go:177] * Deleting "auto-748000" in qemu2 ...
	W0513 17:41:47.778447   37321 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:41:47.778488   37321 start.go:728] Will try again in 5 seconds ...
	I0513 17:41:52.780478   37321 start.go:360] acquireMachinesLock for auto-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:41:52.780584   37321 start.go:364] duration metric: took 90µs to acquireMachinesLock for "auto-748000"
	I0513 17:41:52.780603   37321 start.go:93] Provisioning new machine with config: &{Name:auto-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:41:52.780663   37321 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:41:52.788841   37321 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:41:52.804047   37321 start.go:159] libmachine.API.Create for "auto-748000" (driver="qemu2")
	I0513 17:41:52.804072   37321 client.go:168] LocalClient.Create starting
	I0513 17:41:52.804142   37321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:41:52.804178   37321 main.go:141] libmachine: Decoding PEM data...
	I0513 17:41:52.804186   37321 main.go:141] libmachine: Parsing certificate...
	I0513 17:41:52.804220   37321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:41:52.804246   37321 main.go:141] libmachine: Decoding PEM data...
	I0513 17:41:52.804252   37321 main.go:141] libmachine: Parsing certificate...
	I0513 17:41:52.804538   37321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:41:52.949757   37321 main.go:141] libmachine: Creating SSH key...
	I0513 17:41:53.036313   37321 main.go:141] libmachine: Creating Disk image...
	I0513 17:41:53.036323   37321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:41:53.036535   37321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2
	I0513 17:41:53.049484   37321 main.go:141] libmachine: STDOUT: 
	I0513 17:41:53.049515   37321 main.go:141] libmachine: STDERR: 
	I0513 17:41:53.049591   37321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2 +20000M
	I0513 17:41:53.060929   37321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:41:53.060947   37321 main.go:141] libmachine: STDERR: 
	I0513 17:41:53.060961   37321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2
	I0513 17:41:53.060965   37321 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:41:53.060998   37321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:49:70:f6:26:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/auto-748000/disk.qcow2
	I0513 17:41:53.062870   37321 main.go:141] libmachine: STDOUT: 
	I0513 17:41:53.062885   37321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:41:53.062898   37321 client.go:171] duration metric: took 258.827917ms to LocalClient.Create
	I0513 17:41:55.065183   37321 start.go:128] duration metric: took 2.284523167s to createHost
	I0513 17:41:55.065291   37321 start.go:83] releasing machines lock for "auto-748000", held for 2.284740208s
	W0513 17:41:55.065589   37321 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:41:55.073948   37321 out.go:177] 
	W0513 17:41:55.080151   37321 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:41:55.080188   37321 out.go:239] * 
	* 
	W0513 17:41:55.081719   37321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:41:55.091892   37321 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.96s)
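Note: every failure above reduces to the same host-side condition — nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor, and each create/restart attempt dies with "Connection refused" before provisioning, producing the GUEST_PROVISION exit (status 80) seen in the logs. A minimal triage sketch for the affected agent, assuming the manual /opt/socket_vmnet install implied by the client path in the logs (the launchd lookup and the --vmnet-gateway value below are assumptions, not taken from this report):

	# Does the unix socket exist, and is a socket_vmnet daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If nothing is running, check whether launchd manages it (service label assumed):
	sudo launchctl list | grep -i socket_vmnet

	# Or, for a one-off check, run the daemon directly in the foreground
	# (gateway address is the socket_vmnet default, assumed here):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening again, re-running the same `minikube start` invocation should get past GUEST_PROVISION.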

TestNetworkPlugins/group/kindnet/Start (10.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.030115708s)

-- stdout --
	* [kindnet-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-748000" primary control-plane node in "kindnet-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:41:57.211637   37431 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:41:57.211758   37431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:41:57.211761   37431 out.go:304] Setting ErrFile to fd 2...
	I0513 17:41:57.211772   37431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:41:57.211885   37431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:41:57.213038   37431 out.go:298] Setting JSON to false
	I0513 17:41:57.229314   37431 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27687,"bootTime":1715619630,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:41:57.229376   37431 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:41:57.235723   37431 out.go:177] * [kindnet-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:41:57.242655   37431 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:41:57.246710   37431 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:41:57.242735   37431 notify.go:220] Checking for updates...
	I0513 17:41:57.249697   37431 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:41:57.252693   37431 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:41:57.255721   37431 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:41:57.258711   37431 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:41:57.261978   37431 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:41:57.262052   37431 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:41:57.262107   37431 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:41:57.266608   37431 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:41:57.273689   37431 start.go:297] selected driver: qemu2
	I0513 17:41:57.273698   37431 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:41:57.273705   37431 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:41:57.275921   37431 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:41:57.279653   37431 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:41:57.282748   37431 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:41:57.282765   37431 cni.go:84] Creating CNI manager for "kindnet"
	I0513 17:41:57.282768   37431 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0513 17:41:57.282805   37431 start.go:340] cluster config:
	{Name:kindnet-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:41:57.287341   37431 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:41:57.291658   37431 out.go:177] * Starting "kindnet-748000" primary control-plane node in "kindnet-748000" cluster
	I0513 17:41:57.299484   37431 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:41:57.299503   37431 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:41:57.299512   37431 cache.go:56] Caching tarball of preloaded images
	I0513 17:41:57.299572   37431 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:41:57.299578   37431 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:41:57.299635   37431 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kindnet-748000/config.json ...
	I0513 17:41:57.299645   37431 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kindnet-748000/config.json: {Name:mkaeda19e633b137969ff8aa3772175bb027d5ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:41:57.299942   37431 start.go:360] acquireMachinesLock for kindnet-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:41:57.299979   37431 start.go:364] duration metric: took 30.834µs to acquireMachinesLock for "kindnet-748000"
	I0513 17:41:57.299993   37431 start.go:93] Provisioning new machine with config: &{Name:kindnet-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:41:57.300025   37431 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:41:57.307674   37431 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:41:57.322709   37431 start.go:159] libmachine.API.Create for "kindnet-748000" (driver="qemu2")
	I0513 17:41:57.322733   37431 client.go:168] LocalClient.Create starting
	I0513 17:41:57.322787   37431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:41:57.322824   37431 main.go:141] libmachine: Decoding PEM data...
	I0513 17:41:57.322837   37431 main.go:141] libmachine: Parsing certificate...
	I0513 17:41:57.322885   37431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:41:57.322909   37431 main.go:141] libmachine: Decoding PEM data...
	I0513 17:41:57.322918   37431 main.go:141] libmachine: Parsing certificate...
	I0513 17:41:57.323290   37431 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:41:57.467968   37431 main.go:141] libmachine: Creating SSH key...
	I0513 17:41:57.594643   37431 main.go:141] libmachine: Creating Disk image...
	I0513 17:41:57.594649   37431 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:41:57.594852   37431 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2
	I0513 17:41:57.608021   37431 main.go:141] libmachine: STDOUT: 
	I0513 17:41:57.608043   37431 main.go:141] libmachine: STDERR: 
	I0513 17:41:57.608103   37431 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2 +20000M
	I0513 17:41:57.619422   37431 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:41:57.619437   37431 main.go:141] libmachine: STDERR: 
	I0513 17:41:57.619452   37431 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2
	I0513 17:41:57.619457   37431 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:41:57.619489   37431 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:6a:6c:53:fb:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2
	I0513 17:41:57.621154   37431 main.go:141] libmachine: STDOUT: 
	I0513 17:41:57.621170   37431 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:41:57.621188   37431 client.go:171] duration metric: took 298.456875ms to LocalClient.Create
	I0513 17:41:59.623419   37431 start.go:128] duration metric: took 2.323409334s to createHost
	I0513 17:41:59.623509   37431 start.go:83] releasing machines lock for "kindnet-748000", held for 2.323566917s
	W0513 17:41:59.623601   37431 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:41:59.635117   37431 out.go:177] * Deleting "kindnet-748000" in qemu2 ...
	W0513 17:41:59.665946   37431 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:41:59.665981   37431 start.go:728] Will try again in 5 seconds ...
	I0513 17:42:04.666349   37431 start.go:360] acquireMachinesLock for kindnet-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:04.666901   37431 start.go:364] duration metric: took 422.667µs to acquireMachinesLock for "kindnet-748000"
	I0513 17:42:04.667041   37431 start.go:93] Provisioning new machine with config: &{Name:kindnet-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:04.667312   37431 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:04.676796   37431 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:04.720254   37431 start.go:159] libmachine.API.Create for "kindnet-748000" (driver="qemu2")
	I0513 17:42:04.720312   37431 client.go:168] LocalClient.Create starting
	I0513 17:42:04.720446   37431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:04.720513   37431 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:04.720529   37431 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:04.720603   37431 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:04.720646   37431 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:04.720660   37431 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:04.721215   37431 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:04.873616   37431 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:05.141968   37431 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:05.141979   37431 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:05.142215   37431 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2
	I0513 17:42:05.155292   37431 main.go:141] libmachine: STDOUT: 
	I0513 17:42:05.155313   37431 main.go:141] libmachine: STDERR: 
	I0513 17:42:05.155409   37431 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2 +20000M
	I0513 17:42:05.166734   37431 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:05.166751   37431 main.go:141] libmachine: STDERR: 
	I0513 17:42:05.166763   37431 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2
	I0513 17:42:05.166772   37431 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:05.166822   37431 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:23:36:7d:a7:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kindnet-748000/disk.qcow2
	I0513 17:42:05.168600   37431 main.go:141] libmachine: STDOUT: 
	I0513 17:42:05.168615   37431 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:05.168629   37431 client.go:171] duration metric: took 448.320458ms to LocalClient.Create
	I0513 17:42:07.170793   37431 start.go:128] duration metric: took 2.503483542s to createHost
	I0513 17:42:07.170882   37431 start.go:83] releasing machines lock for "kindnet-748000", held for 2.504012541s
	W0513 17:42:07.171229   37431 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:07.181830   37431 out.go:177] 
	W0513 17:42:07.189096   37431 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:42:07.189147   37431 out.go:239] * 
	* 
	W0513 17:42:07.191177   37431 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:42:07.200932   37431 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.03s)

TestNetworkPlugins/group/calico/Start (9.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.843766042s)

-- stdout --
	* [calico-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-748000" primary control-plane node in "calico-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:42:09.449874   37548 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:42:09.450010   37548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:09.450013   37548 out.go:304] Setting ErrFile to fd 2...
	I0513 17:42:09.450015   37548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:09.450146   37548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:42:09.451256   37548 out.go:298] Setting JSON to false
	I0513 17:42:09.467713   37548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27699,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:42:09.467775   37548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:42:09.472847   37548 out.go:177] * [calico-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:42:09.480717   37548 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:42:09.485715   37548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:42:09.480804   37548 notify.go:220] Checking for updates...
	I0513 17:42:09.489630   37548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:42:09.492748   37548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:42:09.495736   37548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:42:09.498709   37548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:42:09.502058   37548 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:42:09.502123   37548 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:42:09.502171   37548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:42:09.506718   37548 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:42:09.513693   37548 start.go:297] selected driver: qemu2
	I0513 17:42:09.513703   37548 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:42:09.513715   37548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:42:09.515868   37548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:42:09.518707   37548 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:42:09.521688   37548 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:42:09.521703   37548 cni.go:84] Creating CNI manager for "calico"
	I0513 17:42:09.521706   37548 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0513 17:42:09.521737   37548 start.go:340] cluster config:
	{Name:calico-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnet
Path:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:42:09.525933   37548 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:42:09.532537   37548 out.go:177] * Starting "calico-748000" primary control-plane node in "calico-748000" cluster
	I0513 17:42:09.536719   37548 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:42:09.536734   37548 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:42:09.536745   37548 cache.go:56] Caching tarball of preloaded images
	I0513 17:42:09.536804   37548 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:42:09.536811   37548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:42:09.536872   37548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/calico-748000/config.json ...
	I0513 17:42:09.536885   37548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/calico-748000/config.json: {Name:mk88114c29822283b47323e563ab1509f5e2cb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:42:09.537284   37548 start.go:360] acquireMachinesLock for calico-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:09.537314   37548 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "calico-748000"
	I0513 17:42:09.537327   37548 start.go:93] Provisioning new machine with config: &{Name:calico-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-748000 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:09.537352   37548 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:09.540640   37548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:09.556052   37548 start.go:159] libmachine.API.Create for "calico-748000" (driver="qemu2")
	I0513 17:42:09.556076   37548 client.go:168] LocalClient.Create starting
	I0513 17:42:09.556154   37548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:09.556185   37548 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:09.556197   37548 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:09.556235   37548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:09.556257   37548 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:09.556262   37548 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:09.556595   37548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:09.701022   37548 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:09.845827   37548 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:09.845836   37548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:09.846052   37548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2
	I0513 17:42:09.859134   37548 main.go:141] libmachine: STDOUT: 
	I0513 17:42:09.859155   37548 main.go:141] libmachine: STDERR: 
	I0513 17:42:09.859214   37548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2 +20000M
	I0513 17:42:09.870411   37548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:09.870427   37548 main.go:141] libmachine: STDERR: 
	I0513 17:42:09.870440   37548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2
	I0513 17:42:09.870444   37548 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:09.870477   37548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:17:f8:fa:30:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2
	I0513 17:42:09.872132   37548 main.go:141] libmachine: STDOUT: 
	I0513 17:42:09.872151   37548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:09.872173   37548 client.go:171] duration metric: took 316.098416ms to LocalClient.Create
	I0513 17:42:11.874359   37548 start.go:128] duration metric: took 2.337021333s to createHost
	I0513 17:42:11.874456   37548 start.go:83] releasing machines lock for "calico-748000", held for 2.337178875s
	W0513 17:42:11.874556   37548 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:11.879290   37548 out.go:177] * Deleting "calico-748000" in qemu2 ...
	W0513 17:42:11.913288   37548 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:11.913321   37548 start.go:728] Will try again in 5 seconds ...
	I0513 17:42:16.915384   37548 start.go:360] acquireMachinesLock for calico-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:16.915676   37548 start.go:364] duration metric: took 240.125µs to acquireMachinesLock for "calico-748000"
	I0513 17:42:16.915716   37548 start.go:93] Provisioning new machine with config: &{Name:calico-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-748000 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:16.915862   37548 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:16.924193   37548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:16.956677   37548 start.go:159] libmachine.API.Create for "calico-748000" (driver="qemu2")
	I0513 17:42:16.956732   37548 client.go:168] LocalClient.Create starting
	I0513 17:42:16.956843   37548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:16.956905   37548 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:16.956925   37548 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:16.956986   37548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:16.957026   37548 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:16.957040   37548 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:16.957678   37548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:17.106157   37548 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:17.193257   37548 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:17.193263   37548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:17.193459   37548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2
	I0513 17:42:17.206387   37548 main.go:141] libmachine: STDOUT: 
	I0513 17:42:17.206413   37548 main.go:141] libmachine: STDERR: 
	I0513 17:42:17.206472   37548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2 +20000M
	I0513 17:42:17.217383   37548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:17.217399   37548 main.go:141] libmachine: STDERR: 
	I0513 17:42:17.217418   37548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2
	I0513 17:42:17.217423   37548 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:17.217450   37548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:21:b4:97:5e:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/calico-748000/disk.qcow2
	I0513 17:42:17.219144   37548 main.go:141] libmachine: STDOUT: 
	I0513 17:42:17.219157   37548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:17.219170   37548 client.go:171] duration metric: took 262.439625ms to LocalClient.Create
	I0513 17:42:19.221334   37548 start.go:128] duration metric: took 2.30547725s to createHost
	I0513 17:42:19.221407   37548 start.go:83] releasing machines lock for "calico-748000", held for 2.305759875s
	W0513 17:42:19.221807   37548 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:19.235429   37548 out.go:177] 
	W0513 17:42:19.238463   37548 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:42:19.238485   37548 out.go:239] * 
	* 
	W0513 17:42:19.241316   37548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:42:19.253391   37548 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.85s)
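The refused connection can be isolated from minikube by re-running the wrapper invocation the log records, trimmed to its shape (the binary and socket paths are copied from the log above; the placeholders stand for the per-profile ISO and disk paths, and this is a sketch, not a supported entry point):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -display none -boot d -cdrom <boot2docker.iso> -daemonize <disk.qcow2>

If the client prints the same "Failed to connect to "/var/run/socket_vmnet": Connection refused", the daemon is down and no minikube-side change (delete, retry, different CNI) will help, which matches the identical first-attempt and retry failures above.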

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.732885459s)

-- stdout --
	* [custom-flannel-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-748000" primary control-plane node in "custom-flannel-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:42:21.619775   37670 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:42:21.619896   37670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:21.619899   37670 out.go:304] Setting ErrFile to fd 2...
	I0513 17:42:21.619909   37670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:21.620049   37670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:42:21.621208   37670 out.go:298] Setting JSON to false
	I0513 17:42:21.637575   37670 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27711,"bootTime":1715619630,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:42:21.637680   37670 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:42:21.643634   37670 out.go:177] * [custom-flannel-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:42:21.651667   37670 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:42:21.651701   37670 notify.go:220] Checking for updates...
	I0513 17:42:21.657590   37670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:42:21.660617   37670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:42:21.662193   37670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:42:21.665598   37670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:42:21.668578   37670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:42:21.672055   37670 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:42:21.672119   37670 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:42:21.672180   37670 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:42:21.676628   37670 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:42:21.683591   37670 start.go:297] selected driver: qemu2
	I0513 17:42:21.683601   37670 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:42:21.683611   37670 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:42:21.685854   37670 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:42:21.688563   37670 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:42:21.691697   37670 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:42:21.691713   37670 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0513 17:42:21.691721   37670 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0513 17:42:21.691755   37670 start.go:340] cluster config:
	{Name:custom-flannel-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet
/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:42:21.696196   37670 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:42:21.703624   37670 out.go:177] * Starting "custom-flannel-748000" primary control-plane node in "custom-flannel-748000" cluster
	I0513 17:42:21.707553   37670 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:42:21.707568   37670 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:42:21.707578   37670 cache.go:56] Caching tarball of preloaded images
	I0513 17:42:21.707633   37670 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:42:21.707639   37670 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:42:21.707713   37670 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/custom-flannel-748000/config.json ...
	I0513 17:42:21.707725   37670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/custom-flannel-748000/config.json: {Name:mk8e59682273927eb4f54208f0def66b1ae2a934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:42:21.707931   37670 start.go:360] acquireMachinesLock for custom-flannel-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:21.707964   37670 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "custom-flannel-748000"
	I0513 17:42:21.707977   37670 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flann
el-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:21.708025   37670 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:21.716615   37670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:21.733095   37670 start.go:159] libmachine.API.Create for "custom-flannel-748000" (driver="qemu2")
	I0513 17:42:21.733123   37670 client.go:168] LocalClient.Create starting
	I0513 17:42:21.733181   37670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:21.733211   37670 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:21.733224   37670 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:21.733264   37670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:21.733286   37670 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:21.733293   37670 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:21.733630   37670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:21.877658   37670 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:21.906023   37670 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:21.906027   37670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:21.906215   37670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2
	I0513 17:42:21.918609   37670 main.go:141] libmachine: STDOUT: 
	I0513 17:42:21.918632   37670 main.go:141] libmachine: STDERR: 
	I0513 17:42:21.918689   37670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2 +20000M
	I0513 17:42:21.930020   37670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:21.930053   37670 main.go:141] libmachine: STDERR: 
	I0513 17:42:21.930070   37670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2
	I0513 17:42:21.930076   37670 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:21.930106   37670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:e3:b0:bc:66:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2
	I0513 17:42:21.931840   37670 main.go:141] libmachine: STDOUT: 
	I0513 17:42:21.931855   37670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:21.931878   37670 client.go:171] duration metric: took 198.754542ms to LocalClient.Create
	I0513 17:42:23.934043   37670 start.go:128] duration metric: took 2.226037583s to createHost
	I0513 17:42:23.934151   37670 start.go:83] releasing machines lock for "custom-flannel-748000", held for 2.2262225s
	W0513 17:42:23.934209   37670 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:23.945475   37670 out.go:177] * Deleting "custom-flannel-748000" in qemu2 ...
	W0513 17:42:23.976345   37670 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:23.976377   37670 start.go:728] Will try again in 5 seconds ...
	I0513 17:42:28.978452   37670 start.go:360] acquireMachinesLock for custom-flannel-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:28.979091   37670 start.go:364] duration metric: took 524.333µs to acquireMachinesLock for "custom-flannel-748000"
	I0513 17:42:28.979205   37670 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flann
el-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:28.979509   37670 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:28.989345   37670 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:29.041903   37670 start.go:159] libmachine.API.Create for "custom-flannel-748000" (driver="qemu2")
	I0513 17:42:29.041958   37670 client.go:168] LocalClient.Create starting
	I0513 17:42:29.042098   37670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:29.042164   37670 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:29.042184   37670 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:29.042243   37670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:29.042288   37670 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:29.042301   37670 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:29.042850   37670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:29.196006   37670 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:29.258530   37670 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:29.258535   37670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:29.258726   37670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2
	I0513 17:42:29.271490   37670 main.go:141] libmachine: STDOUT: 
	I0513 17:42:29.271514   37670 main.go:141] libmachine: STDERR: 
	I0513 17:42:29.271584   37670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2 +20000M
	I0513 17:42:29.282979   37670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:29.282995   37670 main.go:141] libmachine: STDERR: 
	I0513 17:42:29.283010   37670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2
	I0513 17:42:29.283014   37670 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:29.283049   37670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9e:1d:c6:97:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/custom-flannel-748000/disk.qcow2
	I0513 17:42:29.284714   37670 main.go:141] libmachine: STDOUT: 
	I0513 17:42:29.284729   37670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:29.284740   37670 client.go:171] duration metric: took 242.78225ms to LocalClient.Create
	I0513 17:42:31.286904   37670 start.go:128] duration metric: took 2.307400833s to createHost
	I0513 17:42:31.286995   37670 start.go:83] releasing machines lock for "custom-flannel-748000", held for 2.307906s
	W0513 17:42:31.287518   37670 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:31.298311   37670 out.go:177] 
	W0513 17:42:31.302500   37670 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:42:31.302528   37670 out.go:239] * 
	* 
	W0513 17:42:31.304190   37670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:42:31.314234   37670 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)
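Given that both the initial create and the 5-second retry hit the same refused connection in every group, restarting the daemon on the agent is the likely remediation. A sketch, assuming the stock install layout visible in these logs (binary under /opt/socket_vmnet, socket at /var/run/socket_vmnet; the gateway address is an example value, not from this log):

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 \
	  /var/run/socket_vmnet &

socket_vmnet has to run as root because vmnet.framework requires it; the QEMU processes stay unprivileged behind socket_vmnet_client, which is the point of the client/daemon split.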

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.839103167s)

-- stdout --
	* [false-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-748000" primary control-plane node in "false-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:42:33.715978   37788 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:42:33.716119   37788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:33.716122   37788 out.go:304] Setting ErrFile to fd 2...
	I0513 17:42:33.716124   37788 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:33.716252   37788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:42:33.717336   37788 out.go:298] Setting JSON to false
	I0513 17:42:33.733598   37788 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27723,"bootTime":1715619630,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:42:33.733669   37788 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:42:33.739578   37788 out.go:177] * [false-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:42:33.747586   37788 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:42:33.747637   37788 notify.go:220] Checking for updates...
	I0513 17:42:33.754489   37788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:42:33.761653   37788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:42:33.764489   37788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:42:33.767498   37788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:42:33.770496   37788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:42:33.773870   37788 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:42:33.773942   37788 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:42:33.773991   37788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:42:33.778493   37788 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:42:33.785521   37788 start.go:297] selected driver: qemu2
	I0513 17:42:33.785531   37788 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:42:33.785539   37788 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:42:33.787689   37788 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:42:33.790558   37788 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:42:33.793560   37788 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:42:33.793577   37788 cni.go:84] Creating CNI manager for "false"
	I0513 17:42:33.793603   37788 start.go:340] cluster config:
	{Name:false-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:42:33.797857   37788 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:42:33.805490   37788 out.go:177] * Starting "false-748000" primary control-plane node in "false-748000" cluster
	I0513 17:42:33.809523   37788 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:42:33.809543   37788 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:42:33.809553   37788 cache.go:56] Caching tarball of preloaded images
	I0513 17:42:33.809631   37788 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:42:33.809637   37788 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:42:33.809702   37788 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/false-748000/config.json ...
	I0513 17:42:33.809714   37788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/false-748000/config.json: {Name:mkf9af1ac0e21336f5fd826c4152eba6abc052d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:42:33.810132   37788 start.go:360] acquireMachinesLock for false-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:33.810166   37788 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "false-748000"
	I0513 17:42:33.810178   37788 start.go:93] Provisioning new machine with config: &{Name:false-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:33.810216   37788 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:33.817573   37788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:33.832249   37788 start.go:159] libmachine.API.Create for "false-748000" (driver="qemu2")
	I0513 17:42:33.832274   37788 client.go:168] LocalClient.Create starting
	I0513 17:42:33.832338   37788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:33.832369   37788 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:33.832382   37788 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:33.832421   37788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:33.832444   37788 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:33.832452   37788 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:33.832769   37788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:33.978261   37788 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:34.111643   37788 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:34.111651   37788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:34.111862   37788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2
	I0513 17:42:34.124666   37788 main.go:141] libmachine: STDOUT: 
	I0513 17:42:34.124688   37788 main.go:141] libmachine: STDERR: 
	I0513 17:42:34.124763   37788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2 +20000M
	I0513 17:42:34.135729   37788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:34.135746   37788 main.go:141] libmachine: STDERR: 
	I0513 17:42:34.135765   37788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2
	I0513 17:42:34.135772   37788 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:34.135822   37788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:03:7c:17:79:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2
	I0513 17:42:34.137505   37788 main.go:141] libmachine: STDOUT: 
	I0513 17:42:34.137521   37788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:34.137540   37788 client.go:171] duration metric: took 305.266875ms to LocalClient.Create
	I0513 17:42:36.139595   37788 start.go:128] duration metric: took 2.329412959s to createHost
	I0513 17:42:36.139626   37788 start.go:83] releasing machines lock for "false-748000", held for 2.329501125s
	W0513 17:42:36.139678   37788 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:36.152622   37788 out.go:177] * Deleting "false-748000" in qemu2 ...
	W0513 17:42:36.173403   37788 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:36.173415   37788 start.go:728] Will try again in 5 seconds ...
	I0513 17:42:41.174473   37788 start.go:360] acquireMachinesLock for false-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:41.175014   37788 start.go:364] duration metric: took 443.958µs to acquireMachinesLock for "false-748000"
	I0513 17:42:41.175130   37788 start.go:93] Provisioning new machine with config: &{Name:false-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:41.175356   37788 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:41.184606   37788 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:41.234383   37788 start.go:159] libmachine.API.Create for "false-748000" (driver="qemu2")
	I0513 17:42:41.234445   37788 client.go:168] LocalClient.Create starting
	I0513 17:42:41.234577   37788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:41.234648   37788 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:41.234662   37788 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:41.234726   37788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:41.234772   37788 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:41.234784   37788 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:41.235328   37788 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:41.389382   37788 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:41.452447   37788 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:41.452454   37788 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:41.452645   37788 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2
	I0513 17:42:41.465378   37788 main.go:141] libmachine: STDOUT: 
	I0513 17:42:41.465403   37788 main.go:141] libmachine: STDERR: 
	I0513 17:42:41.465485   37788 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2 +20000M
	I0513 17:42:41.476922   37788 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:41.476939   37788 main.go:141] libmachine: STDERR: 
	I0513 17:42:41.476958   37788 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2
	I0513 17:42:41.476963   37788 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:41.476999   37788 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:8f:82:a0:c9:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/false-748000/disk.qcow2
	I0513 17:42:41.478834   37788 main.go:141] libmachine: STDOUT: 
	I0513 17:42:41.478857   37788 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:41.478873   37788 client.go:171] duration metric: took 244.42575ms to LocalClient.Create
	I0513 17:42:43.481041   37788 start.go:128] duration metric: took 2.305691917s to createHost
	I0513 17:42:43.481115   37788 start.go:83] releasing machines lock for "false-748000", held for 2.306123667s
	W0513 17:42:43.481539   37788 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:43.495262   37788 out.go:177] 
	W0513 17:42:43.499388   37788 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:42:43.499414   37788 out.go:239] * 
	* 
	W0513 17:42:43.502300   37788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:42:43.513133   37788 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
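
Every attempt in this run dies at the same step: socket_vmnet_client reports Connection refused on /var/run/socket_vmnet, so QEMU is never handed the vmnet file descriptor (-netdev socket,fd=3) and the qemu2 driver gives up after its single retry with GUEST_PROVISION / exit status 80. A quick way to check whether the socket_vmnet daemon is listening on the agent at all is to dial the UNIX socket directly. The probe below is a minimal stand-alone Go sketch, not part of minikube or the test suite; the socket path is the SocketVMnetPath recorded in the cluster config above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"

	// Dialing a UNIX socket returns "connection refused" when the socket
	// file exists but no daemon is accepting on it -- the same condition
	// socket_vmnet_client hits in every run above.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If the probe fails the same way, the socket_vmnet daemon on the build agent (typically kept alive as a launchd service) is down or bound elsewhere, and every qemu2 test in this group will fail identically regardless of the network plugin under test.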

TestNetworkPlugins/group/enable-default-cni/Start (9.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.698952667s)

-- stdout --
	* [enable-default-cni-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-748000" primary control-plane node in "enable-default-cni-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:42:45.716946   37903 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:42:45.717080   37903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:45.717084   37903 out.go:304] Setting ErrFile to fd 2...
	I0513 17:42:45.717087   37903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:45.717209   37903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:42:45.718294   37903 out.go:298] Setting JSON to false
	I0513 17:42:45.734842   37903 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27735,"bootTime":1715619630,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:42:45.734945   37903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:42:45.742179   37903 out.go:177] * [enable-default-cni-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:42:45.749159   37903 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:42:45.749205   37903 notify.go:220] Checking for updates...
	I0513 17:42:45.754138   37903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:42:45.757162   37903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:42:45.761135   37903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:42:45.768092   37903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:42:45.771142   37903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:42:45.774427   37903 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:42:45.774492   37903 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:42:45.774545   37903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:42:45.779114   37903 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:42:45.786093   37903 start.go:297] selected driver: qemu2
	I0513 17:42:45.786101   37903 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:42:45.786108   37903 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:42:45.788396   37903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:42:45.792156   37903 out.go:177] * Automatically selected the socket_vmnet network
	E0513 17:42:45.796137   37903 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0513 17:42:45.796158   37903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:42:45.796177   37903 cni.go:84] Creating CNI manager for "bridge"
	I0513 17:42:45.796184   37903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:42:45.796213   37903 start.go:340] cluster config:
	{Name:enable-default-cni-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:42:45.801170   37903 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:42:45.807081   37903 out.go:177] * Starting "enable-default-cni-748000" primary control-plane node in "enable-default-cni-748000" cluster
	I0513 17:42:45.811080   37903 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:42:45.811098   37903 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:42:45.811116   37903 cache.go:56] Caching tarball of preloaded images
	I0513 17:42:45.811199   37903 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:42:45.811212   37903 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:42:45.811277   37903 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/enable-default-cni-748000/config.json ...
	I0513 17:42:45.811289   37903 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/enable-default-cni-748000/config.json: {Name:mk894798593467a6b1334bb816ea4b0bd909b947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:42:45.811520   37903 start.go:360] acquireMachinesLock for enable-default-cni-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:45.811555   37903 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "enable-default-cni-748000"
	I0513 17:42:45.811570   37903 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:45.811611   37903 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:45.819057   37903 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:45.833864   37903 start.go:159] libmachine.API.Create for "enable-default-cni-748000" (driver="qemu2")
	I0513 17:42:45.833898   37903 client.go:168] LocalClient.Create starting
	I0513 17:42:45.833962   37903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:45.833992   37903 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:45.834006   37903 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:45.834044   37903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:45.834067   37903 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:45.834075   37903 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:45.834431   37903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:45.979262   37903 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:46.031949   37903 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:46.031954   37903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:46.032142   37903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2
	I0513 17:42:46.044721   37903 main.go:141] libmachine: STDOUT: 
	I0513 17:42:46.044739   37903 main.go:141] libmachine: STDERR: 
	I0513 17:42:46.044787   37903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2 +20000M
	I0513 17:42:46.055862   37903 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:46.055882   37903 main.go:141] libmachine: STDERR: 
	I0513 17:42:46.055900   37903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2
	I0513 17:42:46.055911   37903 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:46.055937   37903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:7e:70:4d:9e:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2
	I0513 17:42:46.057794   37903 main.go:141] libmachine: STDOUT: 
	I0513 17:42:46.057811   37903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:46.057842   37903 client.go:171] duration metric: took 223.939834ms to LocalClient.Create
	I0513 17:42:48.059882   37903 start.go:128] duration metric: took 2.248308166s to createHost
	I0513 17:42:48.059899   37903 start.go:83] releasing machines lock for "enable-default-cni-748000", held for 2.24838375s
	W0513 17:42:48.059929   37903 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:48.068422   37903 out.go:177] * Deleting "enable-default-cni-748000" in qemu2 ...
	W0513 17:42:48.081143   37903 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:48.081150   37903 start.go:728] Will try again in 5 seconds ...
	I0513 17:42:53.083151   37903 start.go:360] acquireMachinesLock for enable-default-cni-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:53.083347   37903 start.go:364] duration metric: took 155.708µs to acquireMachinesLock for "enable-default-cni-748000"
	I0513 17:42:53.083365   37903 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:53.083420   37903 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:53.091681   37903 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:53.108244   37903 start.go:159] libmachine.API.Create for "enable-default-cni-748000" (driver="qemu2")
	I0513 17:42:53.108288   37903 client.go:168] LocalClient.Create starting
	I0513 17:42:53.108350   37903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:53.108386   37903 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:53.108395   37903 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:53.108433   37903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:53.108456   37903 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:53.108461   37903 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:53.108756   37903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:53.253832   37903 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:53.318548   37903 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:53.318555   37903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:53.318769   37903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2
	I0513 17:42:53.331844   37903 main.go:141] libmachine: STDOUT: 
	I0513 17:42:53.331880   37903 main.go:141] libmachine: STDERR: 
	I0513 17:42:53.331947   37903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2 +20000M
	I0513 17:42:53.343110   37903 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:53.343136   37903 main.go:141] libmachine: STDERR: 
	I0513 17:42:53.343147   37903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2
	I0513 17:42:53.343153   37903 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:53.343185   37903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:f1:cf:86:77:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/enable-default-cni-748000/disk.qcow2
	I0513 17:42:53.344919   37903 main.go:141] libmachine: STDOUT: 
	I0513 17:42:53.344944   37903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:53.344956   37903 client.go:171] duration metric: took 236.669791ms to LocalClient.Create
	I0513 17:42:55.347106   37903 start.go:128] duration metric: took 2.263705958s to createHost
	I0513 17:42:55.347172   37903 start.go:83] releasing machines lock for "enable-default-cni-748000", held for 2.26386125s
	W0513 17:42:55.347576   37903 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:55.357042   37903 out.go:177] 
	W0513 17:42:55.363178   37903 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:42:55.363208   37903 out.go:239] * 
	* 
	W0513 17:42:55.365182   37903 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:42:55.374009   37903 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.70s)
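
Worth noting: both attempts get through every step before networking, and the deprecated --enable-default-cni flag is correctly rewritten to --cni=bridge (the E-level line above), so the failure is unrelated to the CNI under test. The disk pipeline in particular succeeds each time. The sketch below reproduces, outside minikube, the two qemu-img calls libmachine logs (raw-to-qcow2 convert, then a +20000M grow matching DiskSize:20000); the file names are hypothetical stand-ins for the per-profile machines directory.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// prepareDisk mirrors the two qemu-img invocations from the logs above:
// convert the raw scratch image to qcow2, then grow it by 20000 MB.
func prepareDisk(raw, qcow2 string) error {
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"qemu-img", "resize", qcow2, "+20000M"},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical paths; the real runs use
	// .minikube/machines/<profile>/disk.qcow2[.raw].
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
		log.Fatal(err)
	}
	log.Println("disk image ready")
}

That these steps succeed while only the socket_vmnet launch fails points the problem at the host networking helper, not at QEMU or the disk pipeline.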

TestNetworkPlugins/group/flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.728298791s)

-- stdout --
	* [flannel-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-748000" primary control-plane node in "flannel-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:42:57.605861   38013 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:42:57.605997   38013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:57.606003   38013 out.go:304] Setting ErrFile to fd 2...
	I0513 17:42:57.606006   38013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:42:57.606127   38013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:42:57.607254   38013 out.go:298] Setting JSON to false
	I0513 17:42:57.623778   38013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27747,"bootTime":1715619630,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:42:57.623843   38013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:42:57.630252   38013 out.go:177] * [flannel-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:42:57.638238   38013 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:42:57.639879   38013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:42:57.638320   38013 notify.go:220] Checking for updates...
	I0513 17:42:57.645160   38013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:42:57.648199   38013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:42:57.651227   38013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:42:57.654150   38013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:42:57.657593   38013 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:42:57.657661   38013 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:42:57.657706   38013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:42:57.662143   38013 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:42:57.669202   38013 start.go:297] selected driver: qemu2
	I0513 17:42:57.669210   38013 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:42:57.669221   38013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:42:57.671337   38013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:42:57.674137   38013 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:42:57.677204   38013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:42:57.677221   38013 cni.go:84] Creating CNI manager for "flannel"
	I0513 17:42:57.677224   38013 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0513 17:42:57.677262   38013 start.go:340] cluster config:
	{Name:flannel-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:42:57.681716   38013 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:42:57.687175   38013 out.go:177] * Starting "flannel-748000" primary control-plane node in "flannel-748000" cluster
	I0513 17:42:57.691173   38013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:42:57.691185   38013 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:42:57.691190   38013 cache.go:56] Caching tarball of preloaded images
	I0513 17:42:57.691246   38013 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:42:57.691250   38013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:42:57.691298   38013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/flannel-748000/config.json ...
	I0513 17:42:57.691308   38013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/flannel-748000/config.json: {Name:mk93fcaf653dace67f737638ee33797d60f6c4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:42:57.691688   38013 start.go:360] acquireMachinesLock for flannel-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:42:57.691718   38013 start.go:364] duration metric: took 25.084µs to acquireMachinesLock for "flannel-748000"
	I0513 17:42:57.691729   38013 start.go:93] Provisioning new machine with config: &{Name:flannel-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:42:57.691754   38013 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:42:57.696149   38013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:42:57.710697   38013 start.go:159] libmachine.API.Create for "flannel-748000" (driver="qemu2")
	I0513 17:42:57.710723   38013 client.go:168] LocalClient.Create starting
	I0513 17:42:57.710785   38013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:42:57.710814   38013 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:57.710827   38013 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:57.710866   38013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:42:57.710889   38013 main.go:141] libmachine: Decoding PEM data...
	I0513 17:42:57.710896   38013 main.go:141] libmachine: Parsing certificate...
	I0513 17:42:57.711363   38013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:42:57.857338   38013 main.go:141] libmachine: Creating SSH key...
	I0513 17:42:57.902030   38013 main.go:141] libmachine: Creating Disk image...
	I0513 17:42:57.902036   38013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:42:57.902211   38013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2
	I0513 17:42:57.915649   38013 main.go:141] libmachine: STDOUT: 
	I0513 17:42:57.915671   38013 main.go:141] libmachine: STDERR: 
	I0513 17:42:57.915738   38013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2 +20000M
	I0513 17:42:57.928028   38013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:42:57.928048   38013 main.go:141] libmachine: STDERR: 
	I0513 17:42:57.928059   38013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2
	I0513 17:42:57.928064   38013 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:42:57.928094   38013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:11:d1:1f:dd:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2
	I0513 17:42:57.930017   38013 main.go:141] libmachine: STDOUT: 
	I0513 17:42:57.930032   38013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:42:57.930051   38013 client.go:171] duration metric: took 219.327791ms to LocalClient.Create
	I0513 17:42:59.932206   38013 start.go:128] duration metric: took 2.240466708s to createHost
	I0513 17:42:59.932273   38013 start.go:83] releasing machines lock for "flannel-748000", held for 2.240591584s
	W0513 17:42:59.932385   38013 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:59.943642   38013 out.go:177] * Deleting "flannel-748000" in qemu2 ...
	W0513 17:42:59.971077   38013 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:42:59.971108   38013 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:04.973214   38013 start.go:360] acquireMachinesLock for flannel-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:04.973590   38013 start.go:364] duration metric: took 293.541µs to acquireMachinesLock for "flannel-748000"
	I0513 17:43:04.973654   38013 start.go:93] Provisioning new machine with config: &{Name:flannel-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:04.973869   38013 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:04.981336   38013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:43:05.018696   38013 start.go:159] libmachine.API.Create for "flannel-748000" (driver="qemu2")
	I0513 17:43:05.018756   38013 client.go:168] LocalClient.Create starting
	I0513 17:43:05.018868   38013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:05.018935   38013 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:05.018952   38013 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:05.019027   38013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:05.019067   38013 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:05.019077   38013 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:05.019551   38013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:05.170174   38013 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:05.243021   38013 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:05.243029   38013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:05.243223   38013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2
	I0513 17:43:05.255796   38013 main.go:141] libmachine: STDOUT: 
	I0513 17:43:05.255817   38013 main.go:141] libmachine: STDERR: 
	I0513 17:43:05.255879   38013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2 +20000M
	I0513 17:43:05.266747   38013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:05.266765   38013 main.go:141] libmachine: STDERR: 
	I0513 17:43:05.266775   38013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2
	I0513 17:43:05.266779   38013 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:05.266818   38013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:91:2a:7e:4f:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/flannel-748000/disk.qcow2
	I0513 17:43:05.268503   38013 main.go:141] libmachine: STDOUT: 
	I0513 17:43:05.268521   38013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:05.268533   38013 client.go:171] duration metric: took 249.777958ms to LocalClient.Create
	I0513 17:43:07.270589   38013 start.go:128] duration metric: took 2.296739083s to createHost
	I0513 17:43:07.270616   38013 start.go:83] releasing machines lock for "flannel-748000", held for 2.297057333s
	W0513 17:43:07.270779   38013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:07.281941   38013 out.go:177] 
	W0513 17:43:07.286087   38013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:07.286096   38013 out.go:239] * 
	* 
	W0513 17:43:07.286823   38013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:07.295032   38013 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.73s)
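
The network-plugin failures in this group share a root cause: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so qemu-system-aarch64 is never launched and minikube gives up after its single retry. A quick way to confirm the daemon is down on the build agent before rerunning the suite is to probe the socket directly. The following is a minimal illustrative sketch in Go (not part of minikube or the test harness), assuming the SocketVMnetPath recorded in the cluster config above, /var/run/socket_vmnet:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the provisioning config captured in the log above.
	const sockPath = "/var/run/socket_vmnet"

	// Dial the unix socket the same way any socket_vmnet client must before
	// it can hand QEMU a networked file descriptor.
	conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
	if err != nil {
		// A refused connection here reproduces the captured STDERR line:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sockPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sockPath)
}

If this probe is refused, the failure is environmental (the socket_vmnet service on the agent is not listening) rather than specific to the flannel CNI under test; the bridge and kubenet runs below fail identically.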

TestNetworkPlugins/group/bridge/Start (9.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.795794083s)

-- stdout --
	* [bridge-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-748000" primary control-plane node in "bridge-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:43:09.647350   38139 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:09.647455   38139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:09.647459   38139 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:09.647464   38139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:09.647585   38139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:09.648689   38139 out.go:298] Setting JSON to false
	I0513 17:43:09.665194   38139 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27759,"bootTime":1715619630,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:09.665343   38139 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:09.671088   38139 out.go:177] * [bridge-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:09.681069   38139 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:09.686022   38139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:09.681112   38139 notify.go:220] Checking for updates...
	I0513 17:43:09.691993   38139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:09.695026   38139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:09.696536   38139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:09.700067   38139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:09.703410   38139 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:09.703478   38139 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:43:09.703521   38139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:09.707862   38139 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:43:09.715009   38139 start.go:297] selected driver: qemu2
	I0513 17:43:09.715020   38139 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:43:09.715028   38139 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:09.717315   38139 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:43:09.720094   38139 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:43:09.723096   38139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:09.723121   38139 cni.go:84] Creating CNI manager for "bridge"
	I0513 17:43:09.723125   38139 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:43:09.723164   38139 start.go:340] cluster config:
	{Name:bridge-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:09.727463   38139 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:09.733955   38139 out.go:177] * Starting "bridge-748000" primary control-plane node in "bridge-748000" cluster
	I0513 17:43:09.737985   38139 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:43:09.737999   38139 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:09.738007   38139 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:09.738061   38139 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:09.738067   38139 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:43:09.738128   38139 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/bridge-748000/config.json ...
	I0513 17:43:09.738140   38139 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/bridge-748000/config.json: {Name:mkd6c5562a74d495af99e6c02fd6fefeaf9e869d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:43:09.738360   38139 start.go:360] acquireMachinesLock for bridge-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:09.738392   38139 start.go:364] duration metric: took 26.709µs to acquireMachinesLock for "bridge-748000"
	I0513 17:43:09.738405   38139 start.go:93] Provisioning new machine with config: &{Name:bridge-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:09.738432   38139 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:09.746029   38139 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:43:09.761376   38139 start.go:159] libmachine.API.Create for "bridge-748000" (driver="qemu2")
	I0513 17:43:09.761402   38139 client.go:168] LocalClient.Create starting
	I0513 17:43:09.761459   38139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:09.761490   38139 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:09.761503   38139 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:09.761537   38139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:09.761559   38139 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:09.761567   38139 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:09.761966   38139 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:09.906811   38139 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:10.000757   38139 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:10.000763   38139 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:10.000975   38139 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2
	I0513 17:43:10.013471   38139 main.go:141] libmachine: STDOUT: 
	I0513 17:43:10.013504   38139 main.go:141] libmachine: STDERR: 
	I0513 17:43:10.013556   38139 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2 +20000M
	I0513 17:43:10.024849   38139 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:10.024868   38139 main.go:141] libmachine: STDERR: 
	I0513 17:43:10.024887   38139 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2
	I0513 17:43:10.024891   38139 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:10.024921   38139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:85:50:85:53:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2
	I0513 17:43:10.026628   38139 main.go:141] libmachine: STDOUT: 
	I0513 17:43:10.026645   38139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:10.026662   38139 client.go:171] duration metric: took 265.260375ms to LocalClient.Create
	I0513 17:43:12.028837   38139 start.go:128] duration metric: took 2.290420958s to createHost
	I0513 17:43:12.028914   38139 start.go:83] releasing machines lock for "bridge-748000", held for 2.290558s
	W0513 17:43:12.029049   38139 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:12.045424   38139 out.go:177] * Deleting "bridge-748000" in qemu2 ...
	W0513 17:43:12.067129   38139 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:12.067157   38139 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:17.067674   38139 start.go:360] acquireMachinesLock for bridge-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:17.068222   38139 start.go:364] duration metric: took 437.708µs to acquireMachinesLock for "bridge-748000"
	I0513 17:43:17.068386   38139 start.go:93] Provisioning new machine with config: &{Name:bridge-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:17.068621   38139 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:17.078435   38139 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:43:17.122897   38139 start.go:159] libmachine.API.Create for "bridge-748000" (driver="qemu2")
	I0513 17:43:17.122944   38139 client.go:168] LocalClient.Create starting
	I0513 17:43:17.123052   38139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:17.123128   38139 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:17.123142   38139 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:17.123207   38139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:17.123248   38139 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:17.123262   38139 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:17.123737   38139 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:17.274691   38139 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:17.347548   38139 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:17.347554   38139 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:17.347757   38139 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2
	I0513 17:43:17.360229   38139 main.go:141] libmachine: STDOUT: 
	I0513 17:43:17.360249   38139 main.go:141] libmachine: STDERR: 
	I0513 17:43:17.360308   38139 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2 +20000M
	I0513 17:43:17.371060   38139 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:17.371078   38139 main.go:141] libmachine: STDERR: 
	I0513 17:43:17.371092   38139 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2
	I0513 17:43:17.371097   38139 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:17.371141   38139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:64:d9:f6:57:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/bridge-748000/disk.qcow2
	I0513 17:43:17.372813   38139 main.go:141] libmachine: STDOUT: 
	I0513 17:43:17.372828   38139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:17.372848   38139 client.go:171] duration metric: took 249.897709ms to LocalClient.Create
	I0513 17:43:19.375002   38139 start.go:128] duration metric: took 2.306369375s to createHost
	I0513 17:43:19.375067   38139 start.go:83] releasing machines lock for "bridge-748000", held for 2.306867417s
	W0513 17:43:19.375405   38139 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:19.383829   38139 out.go:177] 
	W0513 17:43:19.389919   38139 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:19.389942   38139 out.go:239] * 
	* 
	W0513 17:43:19.391927   38139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:19.400786   38139 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.80s)
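
The bridge run shows the same two-attempt control flow as flannel above: createHost fails, the half-created profile is deleted, minikube waits five seconds, tries once more, and only then exits with GUEST_PROVISION (surfaced to the test as exit status 80). A rough sketch of that flow, where createHost and deleteProfile are hypothetical stand-ins rather than minikube's actual functions and the error string is copied from the captured STDERR:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-in for the VM creation step; in this report every
// attempt fails the same way before QEMU starts.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// Hypothetical stand-in for cleaning up the partially created profile.
func deleteProfile(profile string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
}

func main() {
	profile := "bridge-748000"
	err := createHost(profile)
	if err != nil {
		deleteProfile(profile)
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		err = createHost(profile)
	}
	if err != nil {
		// The second failure is terminal: GUEST_PROVISION, exit status 80.
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

Because both attempts dial the same dead socket, the built-in retry cannot succeed here; restarting the socket_vmnet daemon on the host is what would be expected to change the outcome.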

TestNetworkPlugins/group/kubenet/Start (10s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-748000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.993674209s)

-- stdout --
	* [kubenet-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-748000" primary control-plane node in "kubenet-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:43:21.708652   38253 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:21.708782   38253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:21.708785   38253 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:21.708787   38253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:21.708918   38253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:21.709920   38253 out.go:298] Setting JSON to false
	I0513 17:43:21.726114   38253 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27771,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:21.726184   38253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:21.732928   38253 out.go:177] * [kubenet-748000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:21.741079   38253 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:21.741140   38253 notify.go:220] Checking for updates...
	I0513 17:43:21.745102   38253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:21.748054   38253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:21.751083   38253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:21.754024   38253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:21.757050   38253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:21.760363   38253 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:21.760431   38253 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:43:21.760484   38253 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:21.764026   38253 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:43:21.771008   38253 start.go:297] selected driver: qemu2
	I0513 17:43:21.771018   38253 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:43:21.771026   38253 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:21.773289   38253 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:43:21.776014   38253 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:43:21.779151   38253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:21.779169   38253 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0513 17:43:21.779217   38253 start.go:340] cluster config:
	{Name:kubenet-748000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:21.783408   38253 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:21.789957   38253 out.go:177] * Starting "kubenet-748000" primary control-plane node in "kubenet-748000" cluster
	I0513 17:43:21.794165   38253 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:43:21.794182   38253 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:21.794196   38253 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:21.794262   38253 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:21.794267   38253 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:43:21.794324   38253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kubenet-748000/config.json ...
	I0513 17:43:21.794335   38253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/kubenet-748000/config.json: {Name:mk6e8baa7e1e401ed33624c94dbb3363b26cc3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:43:21.794552   38253 start.go:360] acquireMachinesLock for kubenet-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:21.794583   38253 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "kubenet-748000"
	I0513 17:43:21.794596   38253 start.go:93] Provisioning new machine with config: &{Name:kubenet-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:21.794630   38253 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:21.803043   38253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:43:21.819615   38253 start.go:159] libmachine.API.Create for "kubenet-748000" (driver="qemu2")
	I0513 17:43:21.819640   38253 client.go:168] LocalClient.Create starting
	I0513 17:43:21.819709   38253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:21.819737   38253 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:21.819752   38253 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:21.819791   38253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:21.819813   38253 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:21.819822   38253 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:21.820166   38253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:21.964601   38253 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:22.135793   38253 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:22.135802   38253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:22.136009   38253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2
	I0513 17:43:22.148843   38253 main.go:141] libmachine: STDOUT: 
	I0513 17:43:22.148859   38253 main.go:141] libmachine: STDERR: 
	I0513 17:43:22.148923   38253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2 +20000M
	I0513 17:43:22.159799   38253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:22.159814   38253 main.go:141] libmachine: STDERR: 
	I0513 17:43:22.159832   38253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2
	I0513 17:43:22.159836   38253 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:22.159874   38253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:8a:27:c8:2c:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2
	I0513 17:43:22.161593   38253 main.go:141] libmachine: STDOUT: 
	I0513 17:43:22.161607   38253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:22.161627   38253 client.go:171] duration metric: took 341.988583ms to LocalClient.Create
	I0513 17:43:24.163966   38253 start.go:128] duration metric: took 2.3692945s to createHost
	I0513 17:43:24.164082   38253 start.go:83] releasing machines lock for "kubenet-748000", held for 2.369535291s
	W0513 17:43:24.164151   38253 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:24.180682   38253 out.go:177] * Deleting "kubenet-748000" in qemu2 ...
	W0513 17:43:24.205638   38253 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:24.205678   38253 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:29.207149   38253 start.go:360] acquireMachinesLock for kubenet-748000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:29.207427   38253 start.go:364] duration metric: took 212.709µs to acquireMachinesLock for "kubenet-748000"
	I0513 17:43:29.207498   38253 start.go:93] Provisioning new machine with config: &{Name:kubenet-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:29.207623   38253 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:29.215857   38253 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0513 17:43:29.251129   38253 start.go:159] libmachine.API.Create for "kubenet-748000" (driver="qemu2")
	I0513 17:43:29.251200   38253 client.go:168] LocalClient.Create starting
	I0513 17:43:29.251299   38253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:29.251357   38253 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:29.251378   38253 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:29.251435   38253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:29.251473   38253 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:29.251485   38253 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:29.251965   38253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:29.402108   38253 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:29.608509   38253 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:29.608522   38253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:29.608755   38253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2
	I0513 17:43:29.621544   38253 main.go:141] libmachine: STDOUT: 
	I0513 17:43:29.621569   38253 main.go:141] libmachine: STDERR: 
	I0513 17:43:29.621628   38253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2 +20000M
	I0513 17:43:29.632715   38253 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:29.632731   38253 main.go:141] libmachine: STDERR: 
	I0513 17:43:29.632742   38253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2
	I0513 17:43:29.632749   38253 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:29.632784   38253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:1d:7d:6b:e6:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/kubenet-748000/disk.qcow2
	I0513 17:43:29.634490   38253 main.go:141] libmachine: STDOUT: 
	I0513 17:43:29.634504   38253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:29.634516   38253 client.go:171] duration metric: took 383.318542ms to LocalClient.Create
	I0513 17:43:31.636811   38253 start.go:128] duration metric: took 2.429176875s to createHost
	I0513 17:43:31.636928   38253 start.go:83] releasing machines lock for "kubenet-748000", held for 2.429534292s
	W0513 17:43:31.637303   38253 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:31.646826   38253 out.go:177] 
	W0513 17:43:31.650885   38253 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:31.650928   38253 out.go:239] * 
	W0513 17:43:31.652816   38253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:31.660656   38253 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.00s)
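
Diagnosis for the failures in this group: every start dies before the VM boots because socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), which minikube surfaces as the GUEST_PROVISION exit (status 80). A minimal sketch for checking the daemon on the agent, assuming socket_vmnet was installed as a Homebrew service as in minikube's qemu2 driver setup (the service name and the lsof check are assumptions, not taken from this log):

	# Assumed Homebrew-based install; adjust paths/service name to the host.
	ls -l /var/run/socket_vmnet                 # socket file should exist
	sudo lsof -U | grep socket_vmnet            # something should be listening on it
	HOMEBREW=$(which brew) && sudo "$HOMEBREW" services restart socket_vmnet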

TestStartStop/group/old-k8s-version/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-271000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-271000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.917241583s)

-- stdout --
	* [old-k8s-version-271000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-271000" primary control-plane node in "old-k8s-version-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:43:33.863058   38363 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:33.863220   38363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:33.863223   38363 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:33.863225   38363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:33.863352   38363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:33.864523   38363 out.go:298] Setting JSON to false
	I0513 17:43:33.881032   38363 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27783,"bootTime":1715619630,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:33.881102   38363 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:33.887384   38363 out.go:177] * [old-k8s-version-271000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:33.894296   38363 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:33.898328   38363 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:33.894374   38363 notify.go:220] Checking for updates...
	I0513 17:43:33.904225   38363 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:33.907323   38363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:33.910232   38363 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:33.913250   38363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:33.916671   38363 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:33.916743   38363 config.go:182] Loaded profile config "stopped-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0513 17:43:33.916790   38363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:33.920244   38363 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:43:33.927264   38363 start.go:297] selected driver: qemu2
	I0513 17:43:33.927273   38363 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:43:33.927280   38363 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:33.929732   38363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:43:33.931410   38363 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:43:33.934346   38363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:33.934365   38363 cni.go:84] Creating CNI manager for ""
	I0513 17:43:33.934371   38363 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 17:43:33.934409   38363 start.go:340] cluster config:
	{Name:old-k8s-version-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:33.938796   38363 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:33.946266   38363 out.go:177] * Starting "old-k8s-version-271000" primary control-plane node in "old-k8s-version-271000" cluster
	I0513 17:43:33.950237   38363 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:43:33.950251   38363 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:33.950270   38363 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:33.950319   38363 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:33.950323   38363 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 17:43:33.950385   38363 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/old-k8s-version-271000/config.json ...
	I0513 17:43:33.950395   38363 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/old-k8s-version-271000/config.json: {Name:mke7a5982742361d82422f33b467c9c97d86f471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:43:33.950594   38363 start.go:360] acquireMachinesLock for old-k8s-version-271000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:33.950625   38363 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "old-k8s-version-271000"
	I0513 17:43:33.950638   38363 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:33.950669   38363 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:33.959335   38363 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:43:33.974989   38363 start.go:159] libmachine.API.Create for "old-k8s-version-271000" (driver="qemu2")
	I0513 17:43:33.975011   38363 client.go:168] LocalClient.Create starting
	I0513 17:43:33.975084   38363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:33.975112   38363 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:33.975126   38363 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:33.975164   38363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:33.975186   38363 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:33.975193   38363 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:33.975528   38363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:34.122273   38363 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:34.272777   38363 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:34.272784   38363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:34.273018   38363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:34.286301   38363 main.go:141] libmachine: STDOUT: 
	I0513 17:43:34.286331   38363 main.go:141] libmachine: STDERR: 
	I0513 17:43:34.286404   38363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2 +20000M
	I0513 17:43:34.298209   38363 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:34.298232   38363 main.go:141] libmachine: STDERR: 
	I0513 17:43:34.298246   38363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:34.298250   38363 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:34.298288   38363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:6f:99:1a:7c:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:34.300122   38363 main.go:141] libmachine: STDOUT: 
	I0513 17:43:34.300136   38363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:34.300156   38363 client.go:171] duration metric: took 325.146417ms to LocalClient.Create
	I0513 17:43:36.302134   38363 start.go:128] duration metric: took 2.351504958s to createHost
	I0513 17:43:36.302162   38363 start.go:83] releasing machines lock for "old-k8s-version-271000", held for 2.351578709s
	W0513 17:43:36.302191   38363 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:36.311001   38363 out.go:177] * Deleting "old-k8s-version-271000" in qemu2 ...
	W0513 17:43:36.321986   38363 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:36.321996   38363 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:41.324193   38363 start.go:360] acquireMachinesLock for old-k8s-version-271000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:41.324652   38363 start.go:364] duration metric: took 366.208µs to acquireMachinesLock for "old-k8s-version-271000"
	I0513 17:43:41.324796   38363 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:41.325057   38363 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:41.339815   38363 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:43:41.387953   38363 start.go:159] libmachine.API.Create for "old-k8s-version-271000" (driver="qemu2")
	I0513 17:43:41.388003   38363 client.go:168] LocalClient.Create starting
	I0513 17:43:41.388122   38363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:41.388183   38363 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:41.388200   38363 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:41.388256   38363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:41.388300   38363 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:41.388313   38363 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:41.388821   38363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:41.557819   38363 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:41.683781   38363 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:41.683787   38363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:41.683977   38363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:41.696608   38363 main.go:141] libmachine: STDOUT: 
	I0513 17:43:41.696624   38363 main.go:141] libmachine: STDERR: 
	I0513 17:43:41.696685   38363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2 +20000M
	I0513 17:43:41.707692   38363 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:41.707711   38363 main.go:141] libmachine: STDERR: 
	I0513 17:43:41.707720   38363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:41.707725   38363 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:41.707751   38363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:91:45:61:f7:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:41.709460   38363 main.go:141] libmachine: STDOUT: 
	I0513 17:43:41.709477   38363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:41.709488   38363 client.go:171] duration metric: took 321.48575ms to LocalClient.Create
	I0513 17:43:43.711699   38363 start.go:128] duration metric: took 2.386630167s to createHost
	I0513 17:43:43.711770   38363 start.go:83] releasing machines lock for "old-k8s-version-271000", held for 2.38714425s
	W0513 17:43:43.712040   38363 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:43.723545   38363 out.go:177] 
	W0513 17:43:43.727803   38363 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:43.727827   38363 out.go:239] * 
	W0513 17:43:43.730312   38363 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:43.740502   38363 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-271000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (45.37525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.96s)
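
Note on the disk-creation steps in the log above: libmachine builds each guest disk in two passes, first converting the raw seed image to qcow2 and then growing it by the requested 20000M, and both passes succeed here (STDOUT "Image resized.", empty STDERR), which isolates the failure to the networking step that follows. The equivalent standalone commands, shown with an illustrative machine directory in place of the Jenkins paths:

	# $MACHINE_DIR is a placeholder, not a path from this run.
	MACHINE_DIR="$HOME/.minikube/machines/example"
	qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M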

TestStartStop/group/embed-certs/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.807594417s)

-- stdout --
	* [embed-certs-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-026000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:43:36.462196   38377 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:36.462336   38377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:36.462339   38377 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:36.462342   38377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:36.462469   38377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:36.463521   38377 out.go:298] Setting JSON to false
	I0513 17:43:36.479541   38377 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27786,"bootTime":1715619630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:36.479643   38377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:36.483993   38377 out.go:177] * [embed-certs-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:36.490987   38377 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:36.494985   38377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:36.491024   38377 notify.go:220] Checking for updates...
	I0513 17:43:36.500938   38377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:36.504026   38377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:36.506932   38377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:36.509953   38377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:36.513339   38377 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:36.513408   38377 config.go:182] Loaded profile config "old-k8s-version-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 17:43:36.513461   38377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:36.516923   38377 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:43:36.524005   38377 start.go:297] selected driver: qemu2
	I0513 17:43:36.524017   38377 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:43:36.524026   38377 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:36.526214   38377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:43:36.527886   38377 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:43:36.530984   38377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:36.531001   38377 cni.go:84] Creating CNI manager for ""
	I0513 17:43:36.531009   38377 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:43:36.531016   38377 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:43:36.531050   38377 start.go:340] cluster config:
	{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:36.535567   38377 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:36.542944   38377 out.go:177] * Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	I0513 17:43:36.546962   38377 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:43:36.546978   38377 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:36.546991   38377 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:36.547049   38377 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:36.547057   38377 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:43:36.547124   38377 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/embed-certs-026000/config.json ...
	I0513 17:43:36.547137   38377 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/embed-certs-026000/config.json: {Name:mk870a510aceeae1c5524bb71388c934cf0c4deb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:43:36.547355   38377 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:36.547392   38377 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "embed-certs-026000"
	I0513 17:43:36.547406   38377 start.go:93] Provisioning new machine with config: &{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:36.547436   38377 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:36.555956   38377 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:43:36.573749   38377 start.go:159] libmachine.API.Create for "embed-certs-026000" (driver="qemu2")
	I0513 17:43:36.573779   38377 client.go:168] LocalClient.Create starting
	I0513 17:43:36.573846   38377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:36.573877   38377 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:36.573890   38377 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:36.573928   38377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:36.573951   38377 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:36.573957   38377 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:36.574323   38377 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:36.719402   38377 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:36.814871   38377 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:36.814877   38377 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:36.815072   38377 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:36.827420   38377 main.go:141] libmachine: STDOUT: 
	I0513 17:43:36.827443   38377 main.go:141] libmachine: STDERR: 
	I0513 17:43:36.827498   38377 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2 +20000M
	I0513 17:43:36.838516   38377 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:36.838535   38377 main.go:141] libmachine: STDERR: 
	I0513 17:43:36.838571   38377 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:36.838576   38377 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:36.838608   38377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:33:07:3a:0f:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:36.840293   38377 main.go:141] libmachine: STDOUT: 
	I0513 17:43:36.840308   38377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:36.840327   38377 client.go:171] duration metric: took 266.5495ms to LocalClient.Create
	I0513 17:43:38.842456   38377 start.go:128] duration metric: took 2.295040083s to createHost
	I0513 17:43:38.842518   38377 start.go:83] releasing machines lock for "embed-certs-026000", held for 2.295162458s
	W0513 17:43:38.842579   38377 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:38.858950   38377 out.go:177] * Deleting "embed-certs-026000" in qemu2 ...
	W0513 17:43:38.885839   38377 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:38.885862   38377 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:43.886330   38377 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:43.886397   38377 start.go:364] duration metric: took 50.208µs to acquireMachinesLock for "embed-certs-026000"
	I0513 17:43:43.886418   38377 start.go:93] Provisioning new machine with config: &{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:43.886468   38377 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:43.896856   38377 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:43:43.912739   38377 start.go:159] libmachine.API.Create for "embed-certs-026000" (driver="qemu2")
	I0513 17:43:43.912773   38377 client.go:168] LocalClient.Create starting
	I0513 17:43:43.912848   38377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:43.912890   38377 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:43.912899   38377 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:43.912933   38377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:43.912955   38377 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:43.912963   38377 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:43.913254   38377 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:44.102943   38377 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:44.182732   38377 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:44.182739   38377 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:44.182938   38377 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:44.195232   38377 main.go:141] libmachine: STDOUT: 
	I0513 17:43:44.195252   38377 main.go:141] libmachine: STDERR: 
	I0513 17:43:44.195321   38377 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2 +20000M
	I0513 17:43:44.206515   38377 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:44.206544   38377 main.go:141] libmachine: STDERR: 
	I0513 17:43:44.206557   38377 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:44.206562   38377 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:44.206599   38377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:62:a0:24:a4:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:44.208289   38377 main.go:141] libmachine: STDOUT: 
	I0513 17:43:44.208305   38377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:44.208318   38377 client.go:171] duration metric: took 295.548ms to LocalClient.Create
	I0513 17:43:46.208618   38377 start.go:128] duration metric: took 2.322175375s to createHost
	I0513 17:43:46.208643   38377 start.go:83] releasing machines lock for "embed-certs-026000", held for 2.322285458s
	W0513 17:43:46.208777   38377 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-026000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:46.217110   38377 out.go:177] 
	W0513 17:43:46.221147   38377 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:46.221173   38377 out.go:239] * 
	W0513 17:43:46.222021   38377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:46.232104   38377 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (34.853583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.84s)
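
Note: every failure in this entry has the same root cause. Nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand QEMU a connected vmnet file descriptor, and the start aborts before a VM exists. The sketch below is a minimal Go probe for that precondition (illustrative, not part of the test suite; the socket path is taken from the failing command line above):

	// vmnetprobe.go: dial the socket_vmnet control socket that the qemu2
	// driver's socket_vmnet_client needs. "connection refused" (or "no such
	// file or directory") here reproduces the condition behind every
	// exit-status-80 start in this report.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}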

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-271000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-271000 create -f testdata/busybox.yaml: exit status 1 (28.645292ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-271000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-271000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (30.432541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (37.016458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
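
Note: this failure (and the later DeployApp/EnableAddonWhileActive/AddonExistsAfterStop entries in both groups) is a cascade of the failed start, not an independent bug: the kubeconfig context is only written after a successful start, so the context named after the profile never exists. A small pre-flight sketch that makes the dependency explicit (a hypothetical helper; it shells out to the real kubectl, whose "config get-contexts NAME" exits non-zero when NAME is absent):

	// ctxcheck.go: confirm a kubeconfig context exists before running steps
	// that assume it, which is exactly the check these tests fail.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		profile := "old-k8s-version-271000" // profile name from the log above
		cmd := exec.Command("kubectl", "config", "get-contexts", profile)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "context %q does not exist: %v\n", profile, err)
			os.Exit(1)
		}
	}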

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-271000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-271000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-271000 describe deploy/metrics-server -n kube-system: exit status 1 (29.336292ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-271000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-271000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (32.463792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-026000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-026000 create -f testdata/busybox.yaml: exit status 1 (26.7705ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-026000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-026000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (30.44825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (33.961416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-271000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-271000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.199965958s)

                                                
                                                
-- stdout --
	* [old-k8s-version-271000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-271000" primary control-plane node in "old-k8s-version-271000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-271000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:43:46.339125   38433 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:46.339265   38433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:46.339271   38433 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:46.339273   38433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:46.339392   38433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:46.340512   38433 out.go:298] Setting JSON to false
	I0513 17:43:46.358335   38433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27796,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:46.358415   38433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:46.363072   38433 out.go:177] * [old-k8s-version-271000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:46.374102   38433 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:46.370145   38433 notify.go:220] Checking for updates...
	I0513 17:43:46.381029   38433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:46.384132   38433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:46.387149   38433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:46.388414   38433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:46.391067   38433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:46.394401   38433 config.go:182] Loaded profile config "old-k8s-version-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 17:43:46.397140   38433 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0513 17:43:46.401148   38433 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:46.408101   38433 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:43:46.415142   38433 start.go:297] selected driver: qemu2
	I0513 17:43:46.415152   38433 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:46.415243   38433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:46.417431   38433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:46.417453   38433 cni.go:84] Creating CNI manager for ""
	I0513 17:43:46.417460   38433 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 17:43:46.417487   38433 start.go:340] cluster config:
	{Name:old-k8s-version-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:46.421510   38433 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:46.433164   38433 out.go:177] * Starting "old-k8s-version-271000" primary control-plane node in "old-k8s-version-271000" cluster
	I0513 17:43:46.436091   38433 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:43:46.436106   38433 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:46.436115   38433 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:46.436173   38433 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:46.436178   38433 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 17:43:46.436251   38433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/old-k8s-version-271000/config.json ...
	I0513 17:43:46.436582   38433 start.go:360] acquireMachinesLock for old-k8s-version-271000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:46.436613   38433 start.go:364] duration metric: took 25.459µs to acquireMachinesLock for "old-k8s-version-271000"
	I0513 17:43:46.436622   38433 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:43:46.436627   38433 fix.go:54] fixHost starting: 
	I0513 17:43:46.436732   38433 fix.go:112] recreateIfNeeded on old-k8s-version-271000: state=Stopped err=<nil>
	W0513 17:43:46.436740   38433 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:43:46.445038   38433 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-271000" ...
	I0513 17:43:46.448122   38433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:91:45:61:f7:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:46.450113   38433 main.go:141] libmachine: STDOUT: 
	I0513 17:43:46.450134   38433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:46.450159   38433 fix.go:56] duration metric: took 13.533125ms for fixHost
	I0513 17:43:46.450161   38433 start.go:83] releasing machines lock for "old-k8s-version-271000", held for 13.544542ms
	W0513 17:43:46.450170   38433 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:46.450197   38433 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:46.450201   38433 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:51.452335   38433 start.go:360] acquireMachinesLock for old-k8s-version-271000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:51.452714   38433 start.go:364] duration metric: took 283.708µs to acquireMachinesLock for "old-k8s-version-271000"
	I0513 17:43:51.452835   38433 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:43:51.452853   38433 fix.go:54] fixHost starting: 
	I0513 17:43:51.453646   38433 fix.go:112] recreateIfNeeded on old-k8s-version-271000: state=Stopped err=<nil>
	W0513 17:43:51.453690   38433 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:43:51.461084   38433 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-271000" ...
	I0513 17:43:51.465291   38433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:91:45:61:f7:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/old-k8s-version-271000/disk.qcow2
	I0513 17:43:51.474106   38433 main.go:141] libmachine: STDOUT: 
	I0513 17:43:51.474159   38433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:51.474238   38433 fix.go:56] duration metric: took 21.385ms for fixHost
	I0513 17:43:51.474254   38433 start.go:83] releasing machines lock for "old-k8s-version-271000", held for 21.521541ms
	W0513 17:43:51.474444   38433 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-271000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:51.482079   38433 out.go:177] 
	W0513 17:43:51.486120   38433 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:51.486143   38433 out.go:239] * 
	W0513 17:43:51.488930   38433 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:51.497029   38433 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-271000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (65.572459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
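
Note: the stderr above shows the driver's full recovery path: fixHost fails, minikube logs "StartHost failed, but will try again", waits five seconds (start.go:728), retries once, and only then exits with the GUEST_PROVISION reason, which surfaces as exit status 80. Reduced to its shape as a sketch (a paraphrase, not minikube source):

	// retry.go: the restart-once-after-5s flow visible at start.go:713 and
	// start.go:728 in the log, with the same outcome when both attempts fail.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := start(); err != nil {
				return fmt.Errorf("error provisioning guest: %w", err)
			}
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println(err) // both attempts fail, mirroring the log
	}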

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-026000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-026000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-026000 describe deploy/metrics-server -n kube-system: exit status 1 (26.429ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-026000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-026000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (28.276459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.250606125s)

                                                
                                                
-- stdout --
	* [embed-certs-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	* Restarting existing qemu2 VM for "embed-certs-026000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-026000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:43:49.783809   38470 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:49.783931   38470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:49.783935   38470 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:49.783937   38470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:49.784054   38470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:49.785071   38470 out.go:298] Setting JSON to false
	I0513 17:43:49.801098   38470 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27799,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:49.801160   38470 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:49.806536   38470 out.go:177] * [embed-certs-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:49.813560   38470 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:49.813618   38470 notify.go:220] Checking for updates...
	I0513 17:43:49.817536   38470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:49.821570   38470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:49.824455   38470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:49.827517   38470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:49.830539   38470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:49.833721   38470 config.go:182] Loaded profile config "embed-certs-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:49.833984   38470 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:49.838504   38470 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:43:49.845471   38470 start.go:297] selected driver: qemu2
	I0513 17:43:49.845483   38470 start.go:901] validating driver "qemu2" against &{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:49.845589   38470 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:49.847766   38470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:49.847796   38470 cni.go:84] Creating CNI manager for ""
	I0513 17:43:49.847804   38470 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:43:49.847832   38470 start.go:340] cluster config:
	{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:49.852133   38470 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:49.859464   38470 out.go:177] * Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	I0513 17:43:49.863513   38470 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:43:49.863529   38470 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:49.863542   38470 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:49.863605   38470 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:49.863611   38470 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:43:49.863678   38470 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/embed-certs-026000/config.json ...
	I0513 17:43:49.864212   38470 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:49.864239   38470 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "embed-certs-026000"
	I0513 17:43:49.864249   38470 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:43:49.864253   38470 fix.go:54] fixHost starting: 
	I0513 17:43:49.864363   38470 fix.go:112] recreateIfNeeded on embed-certs-026000: state=Stopped err=<nil>
	W0513 17:43:49.864371   38470 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:43:49.872511   38470 out.go:177] * Restarting existing qemu2 VM for "embed-certs-026000" ...
	I0513 17:43:49.876559   38470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:62:a0:24:a4:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:49.878523   38470 main.go:141] libmachine: STDOUT: 
	I0513 17:43:49.878546   38470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:49.878573   38470 fix.go:56] duration metric: took 14.319208ms for fixHost
	I0513 17:43:49.878576   38470 start.go:83] releasing machines lock for "embed-certs-026000", held for 14.3335ms
	W0513 17:43:49.878583   38470 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:49.878644   38470 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:49.878649   38470 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:54.880786   38470 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:54.923118   38470 start.go:364] duration metric: took 42.192417ms to acquireMachinesLock for "embed-certs-026000"
	I0513 17:43:54.923279   38470 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:43:54.923299   38470 fix.go:54] fixHost starting: 
	I0513 17:43:54.923969   38470 fix.go:112] recreateIfNeeded on embed-certs-026000: state=Stopped err=<nil>
	W0513 17:43:54.923996   38470 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:43:54.929568   38470 out.go:177] * Restarting existing qemu2 VM for "embed-certs-026000" ...
	I0513 17:43:54.944762   38470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:62:a0:24:a4:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/embed-certs-026000/disk.qcow2
	I0513 17:43:54.956047   38470 main.go:141] libmachine: STDOUT: 
	I0513 17:43:54.956126   38470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:54.956239   38470 fix.go:56] duration metric: took 32.940625ms for fixHost
	I0513 17:43:54.956262   38470 start.go:83] releasing machines lock for "embed-certs-026000", held for 33.069542ms
	W0513 17:43:54.956455   38470 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-026000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:54.965693   38470 out.go:177] 
	W0513 17:43:54.970664   38470 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:43:54.970684   38470 out.go:239] * 
	W0513 17:43:54.972242   38470 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:43:54.988443   38470 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (58.516833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.31s)
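
Note: the recurring "exit status 7 (may be ok)" from the post-mortem status checks is consistent with the failed starts rather than a separate error: minikube's status command composes its exit code from independent "not running" bits, and 7 says the host, the kubelet, and the apiserver are all down. An illustrative reconstruction of that arithmetic (constant names are mine, not copied from minikube):

	// statuscode.go: how a composite status exit code of 7 arises from three
	// OR-ed "not running" bits.
	package main

	import "fmt"

	const (
		hostNotRunning    = 1 << 0 // 1
		clusterNotRunning = 1 << 1 // 2
		k8sNotRunning     = 1 << 2 // 4
	)

	func main() {
		code := hostNotRunning | clusterNotRunning | k8sNotRunning
		fmt.Println("everything stopped ->", code) // prints 7, as in the log
	}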

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-271000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (30.66475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-271000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-271000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-271000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.914917ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-271000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-271000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (28.697417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-271000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (27.988875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
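
Note on the "(-want +got)" block above: it is a go-cmp style diff in which every expected v1.20.0 image lands on the "-" side because `image list` against the never-started VM returned nothing. A sketch of how such a diff is produced, assuming github.com/google/go-cmp (which the -want +got convention suggests); the image names are copied from the failure:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// A subset of the images the test expects for Kubernetes v1.20.0.
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/pause:3.2",
	}
	// "image list" on a host that never started yields an empty list.
	got := []string{}
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}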

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-271000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-271000 --alsologtostderr -v=1: exit status 83 (40.352541ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-271000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-271000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:43:51.759444   38489 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:51.759866   38489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:51.759870   38489 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:51.759872   38489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:51.760029   38489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:51.760262   38489 out.go:298] Setting JSON to false
	I0513 17:43:51.760270   38489 mustload.go:65] Loading cluster: old-k8s-version-271000
	I0513 17:43:51.760467   38489 config.go:182] Loaded profile config "old-k8s-version-271000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0513 17:43:51.764777   38489 out.go:177] * The control-plane node old-k8s-version-271000 host is not running: state=Stopped
	I0513 17:43:51.768791   38489 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-271000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-271000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (28.376625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (27.827209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-271000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
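
Note on the "(dbg) Non-zero exit" lines: the harness runs each binary and records the child process's exit status; `pause` returns exit status 83 here, which the output ties to the host being stopped rather than to a crash. A rough sketch of how such a status is captured in Go (hypothetical stand-alone program, not the actual helpers_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test runs and inspect its exit status.
	cmd := exec.Command("out/minikube-darwin-arm64", "pause",
		"-p", "old-k8s-version-271000", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Against a stopped profile this run reported: exit status 83.
		fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err) // e.g. binary missing
		return
	}
	fmt.Printf("success:\n%s", out)
}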

TestStartStop/group/no-preload/serial/FirstStart (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.96683375s)

                                                
                                                
-- stdout --
	* [no-preload-981000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-981000" primary control-plane node in "no-preload-981000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-981000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:43:52.453263   38524 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:52.453397   38524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:52.453400   38524 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:52.453403   38524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:52.453523   38524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:52.454601   38524 out.go:298] Setting JSON to false
	I0513 17:43:52.470558   38524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27802,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:52.470628   38524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:52.474992   38524 out.go:177] * [no-preload-981000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:52.482070   38524 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:52.482126   38524 notify.go:220] Checking for updates...
	I0513 17:43:52.489964   38524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:52.492983   38524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:52.495926   38524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:52.499017   38524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:52.501986   38524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:52.505346   38524 config.go:182] Loaded profile config "embed-certs-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:52.505405   38524 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:52.505461   38524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:52.509988   38524 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:43:52.516958   38524 start.go:297] selected driver: qemu2
	I0513 17:43:52.516967   38524 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:43:52.516978   38524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:52.519242   38524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:43:52.521922   38524 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:43:52.525069   38524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:52.525088   38524 cni.go:84] Creating CNI manager for ""
	I0513 17:43:52.525094   38524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:43:52.525099   38524 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:43:52.525133   38524 start.go:340] cluster config:
	{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:52.529593   38524 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.537971   38524 out.go:177] * Starting "no-preload-981000" primary control-plane node in "no-preload-981000" cluster
	I0513 17:43:52.540980   38524 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:43:52.541061   38524 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/no-preload-981000/config.json ...
	I0513 17:43:52.541082   38524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/no-preload-981000/config.json: {Name:mkd195dce680ba500d14dc69f934e7758d9f3a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:43:52.541086   38524 cache.go:107] acquiring lock: {Name:mkc505811e97ace7cc0b74931a03e8484cd9f6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541109   38524 cache.go:107] acquiring lock: {Name:mk5d133a9e8b618e6273c3126afd1a3513319a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541123   38524 cache.go:107] acquiring lock: {Name:mkc49cc30839e44703610b108a9c1d5156e21fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541122   38524 cache.go:107] acquiring lock: {Name:mk835c3c770d3611d3f77181b29eade95ffb2e8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541134   38524 cache.go:107] acquiring lock: {Name:mkdc74f8a333aaa85e26e5b46063ae9a07acfdfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541146   38524 cache.go:107] acquiring lock: {Name:mk3fa9c0568228e56a578e7a48ba696f6f372110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541148   38524 cache.go:107] acquiring lock: {Name:mk90d9f90a8a3791f6c6f1bff44138e38efd3ef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541123   38524 cache.go:107] acquiring lock: {Name:mk56d0d31290bb39622156b37c1e82ddfbce8755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:52.541298   38524 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0513 17:43:52.541304   38524 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 196.834µs
	I0513 17:43:52.541325   38524 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 17:43:52.541351   38524 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0513 17:43:52.541361   38524 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0513 17:43:52.541381   38524 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0513 17:43:52.541514   38524 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0513 17:43:52.541600   38524 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0513 17:43:52.541616   38524 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0513 17:43:52.541730   38524 start.go:360] acquireMachinesLock for no-preload-981000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:52.541747   38524 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0513 17:43:52.541786   38524 start.go:364] duration metric: took 40.833µs to acquireMachinesLock for "no-preload-981000"
	I0513 17:43:52.541803   38524 start.go:93] Provisioning new machine with config: &{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:52.541850   38524 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:52.550975   38524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:43:52.556526   38524 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0513 17:43:52.556569   38524 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0513 17:43:52.556522   38524 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0513 17:43:52.556644   38524 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0513 17:43:52.556872   38524 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0513 17:43:52.556863   38524 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0513 17:43:52.557159   38524 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0513 17:43:52.568978   38524 start.go:159] libmachine.API.Create for "no-preload-981000" (driver="qemu2")
	I0513 17:43:52.568997   38524 client.go:168] LocalClient.Create starting
	I0513 17:43:52.569079   38524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:52.569109   38524 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:52.569142   38524 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:52.569178   38524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:52.569201   38524 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:52.569209   38524 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:52.569520   38524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:52.716991   38524 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:52.896806   38524 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:52.896825   38524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:52.897010   38524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:43:52.909560   38524 main.go:141] libmachine: STDOUT: 
	I0513 17:43:52.909580   38524 main.go:141] libmachine: STDERR: 
	I0513 17:43:52.909625   38524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2 +20000M
	I0513 17:43:52.920895   38524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:52.920910   38524 main.go:141] libmachine: STDERR: 
	I0513 17:43:52.920927   38524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:43:52.920931   38524 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:52.920958   38524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d2:1e:fb:73:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:43:52.922697   38524 main.go:141] libmachine: STDOUT: 
	I0513 17:43:52.922766   38524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:52.922781   38524 client.go:171] duration metric: took 353.787875ms to LocalClient.Create
	I0513 17:43:52.972643   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0513 17:43:52.981559   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0513 17:43:52.987125   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0513 17:43:52.989752   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0
	I0513 17:43:53.016626   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0
	I0513 17:43:53.037804   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0
	I0513 17:43:53.081562   38524 cache.go:162] opening:  /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0513 17:43:53.097970   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0513 17:43:53.097983   38524 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 556.87525ms
	I0513 17:43:53.097996   38524 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0513 17:43:54.548316   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0513 17:43:54.548361   38524 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.007247584s
	I0513 17:43:54.548409   38524 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0513 17:43:54.922949   38524 start.go:128] duration metric: took 2.381121334s to createHost
	I0513 17:43:54.923016   38524 start.go:83] releasing machines lock for "no-preload-981000", held for 2.381266792s
	W0513 17:43:54.923114   38524 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:54.940554   38524 out.go:177] * Deleting "no-preload-981000" in qemu2 ...
	W0513 17:43:54.985098   38524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:54.985172   38524 start.go:728] Will try again in 5 seconds ...
	I0513 17:43:56.014533   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0513 17:43:56.014543   38524 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 3.473489125s
	I0513 17:43:56.014549   38524 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0513 17:43:56.633365   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0513 17:43:56.633452   38524 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 4.092369875s
	I0513 17:43:56.633493   38524 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0513 17:43:57.277028   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0513 17:43:57.277075   38524 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 4.736079541s
	I0513 17:43:57.277170   38524 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0513 17:43:57.867629   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0513 17:43:57.867677   38524 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 5.326702292s
	I0513 17:43:57.867710   38524 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0513 17:43:59.985645   38524 start.go:360] acquireMachinesLock for no-preload-981000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:59.985920   38524 start.go:364] duration metric: took 215.875µs to acquireMachinesLock for "no-preload-981000"
	I0513 17:43:59.986036   38524 start.go:93] Provisioning new machine with config: &{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:59.986221   38524 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:59.992434   38524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:44:00.033761   38524 start.go:159] libmachine.API.Create for "no-preload-981000" (driver="qemu2")
	I0513 17:44:00.033836   38524 client.go:168] LocalClient.Create starting
	I0513 17:44:00.033950   38524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:44:00.034063   38524 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:00.034088   38524 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:00.034159   38524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:44:00.034211   38524 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:00.034227   38524 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:00.034747   38524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:44:00.190752   38524 main.go:141] libmachine: Creating SSH key...
	I0513 17:44:00.315225   38524 main.go:141] libmachine: Creating Disk image...
	I0513 17:44:00.315231   38524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:44:00.315415   38524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:44:00.327996   38524 main.go:141] libmachine: STDOUT: 
	I0513 17:44:00.328018   38524 main.go:141] libmachine: STDERR: 
	I0513 17:44:00.328078   38524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2 +20000M
	I0513 17:44:00.338970   38524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:44:00.338990   38524 main.go:141] libmachine: STDERR: 
	I0513 17:44:00.339002   38524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:44:00.339008   38524 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:44:00.339066   38524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:02:ad:46:0f:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:44:00.340852   38524 main.go:141] libmachine: STDOUT: 
	I0513 17:44:00.340882   38524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:00.340896   38524 client.go:171] duration metric: took 307.059ms to LocalClient.Create
	I0513 17:44:00.739904   38524 cache.go:157] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0513 17:44:00.739967   38524 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.198990625s
	I0513 17:44:00.739990   38524 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0513 17:44:00.740101   38524 cache.go:87] Successfully saved all images to host disk.
	I0513 17:44:02.341886   38524 start.go:128] duration metric: took 2.355655583s to createHost
	I0513 17:44:02.341933   38524 start.go:83] releasing machines lock for "no-preload-981000", held for 2.356037125s
	W0513 17:44:02.342222   38524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:02.357815   38524 out.go:177] 
	W0513 17:44:02.366933   38524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:02.367006   38524 out.go:239] * 
	* 
	W0513 17:44:02.369407   38524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:44:02.382786   38524 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (64.373125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.03s)
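
Note on the root cause: every FirstStart failure in this run bottoms out in the same error, QEMU's networking helper cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and every dependent subtest inherits a stopped host. A minimal probe of that unix socket, assuming only the daemon path shown in the logs (illustrative; the real fix is ensuring the socket_vmnet service is running on the CI host):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the control socket that socket_vmnet_client hands to QEMU.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening this yields a "connection refused"
		// error, matching the StartHost failures above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}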

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-026000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (29.993083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-026000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-026000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-026000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.084416ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-026000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-026000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (27.805583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-026000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (28.226083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-026000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-026000 --alsologtostderr -v=1: exit status 83 (42.518458ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-026000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-026000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:43:55.245953   38574 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:55.246086   38574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:55.246089   38574 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:55.246091   38574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:55.246227   38574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:55.246443   38574 out.go:298] Setting JSON to false
	I0513 17:43:55.246452   38574 mustload.go:65] Loading cluster: embed-certs-026000
	I0513 17:43:55.246643   38574 config.go:182] Loaded profile config "embed-certs-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:55.250635   38574 out.go:177] * The control-plane node embed-certs-026000 host is not running: state=Stopped
	I0513 17:43:55.258433   38574 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-026000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-026000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (27.740292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (27.659459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.892169916s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-730000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0513 17:43:55.695638   38597 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:43:55.695773   38597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:55.695778   38597 out.go:304] Setting ErrFile to fd 2...
	I0513 17:43:55.695781   38597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:43:55.695902   38597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:43:55.697119   38597 out.go:298] Setting JSON to false
	I0513 17:43:55.713363   38597 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27805,"bootTime":1715619630,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:43:55.713428   38597 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:43:55.717882   38597 out.go:177] * [default-k8s-diff-port-730000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:43:55.728811   38597 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:43:55.724764   38597 notify.go:220] Checking for updates...
	I0513 17:43:55.736776   38597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:43:55.743830   38597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:43:55.751788   38597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:43:55.758779   38597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:43:55.764325   38597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:43:55.768130   38597 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:55.768207   38597 config.go:182] Loaded profile config "no-preload-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:43:55.768279   38597 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:43:55.771850   38597 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:43:55.778813   38597 start.go:297] selected driver: qemu2
	I0513 17:43:55.778820   38597 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:43:55.778830   38597 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:43:55.781156   38597 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:43:55.784830   38597 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:43:55.788903   38597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:43:55.788922   38597 cni.go:84] Creating CNI manager for ""
	I0513 17:43:55.788929   38597 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:43:55.788934   38597 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:43:55.788971   38597 start.go:340] cluster config:
	{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:43:55.793775   38597 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:43:55.800765   38597 out.go:177] * Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	I0513 17:43:55.803848   38597 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:43:55.803863   38597 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:43:55.803871   38597 cache.go:56] Caching tarball of preloaded images
	I0513 17:43:55.803934   38597 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:43:55.803941   38597 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:43:55.804010   38597 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/default-k8s-diff-port-730000/config.json ...
	I0513 17:43:55.804023   38597 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/default-k8s-diff-port-730000/config.json: {Name:mk744a40c99e099814979dfd7686fcacf3a0929a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:43:55.804256   38597 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:43:55.804294   38597 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0513 17:43:55.804309   38597 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:defau
lt-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:43:55.804341   38597 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:43:55.812800   38597 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:43:55.830978   38597 start.go:159] libmachine.API.Create for "default-k8s-diff-port-730000" (driver="qemu2")
	I0513 17:43:55.831008   38597 client.go:168] LocalClient.Create starting
	I0513 17:43:55.831070   38597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:43:55.831103   38597 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:55.831119   38597 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:55.831161   38597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:43:55.831185   38597 main.go:141] libmachine: Decoding PEM data...
	I0513 17:43:55.831192   38597 main.go:141] libmachine: Parsing certificate...
	I0513 17:43:55.831552   38597 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:43:55.976684   38597 main.go:141] libmachine: Creating SSH key...
	I0513 17:43:56.100706   38597 main.go:141] libmachine: Creating Disk image...
	I0513 17:43:56.100713   38597 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:43:56.100886   38597 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:43:56.113677   38597 main.go:141] libmachine: STDOUT: 
	I0513 17:43:56.113696   38597 main.go:141] libmachine: STDERR: 
	I0513 17:43:56.113756   38597 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2 +20000M
	I0513 17:43:56.125102   38597 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:43:56.125120   38597 main.go:141] libmachine: STDERR: 
	I0513 17:43:56.125139   38597 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:43:56.125143   38597 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:43:56.125187   38597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:9e:fd:78:ef:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:43:56.126949   38597 main.go:141] libmachine: STDOUT: 
	I0513 17:43:56.126964   38597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:43:56.126979   38597 client.go:171] duration metric: took 295.972084ms to LocalClient.Create
	I0513 17:43:58.129117   38597 start.go:128] duration metric: took 2.324798791s to createHost
	I0513 17:43:58.129177   38597 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 2.324907334s
	W0513 17:43:58.129268   38597 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:58.137494   38597 out.go:177] * Deleting "default-k8s-diff-port-730000" in qemu2 ...
	W0513 17:43:58.165745   38597 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:43:58.165798   38597 start.go:728] Will try again in 5 seconds ...
	I0513 17:44:03.167925   38597 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:03.168378   38597 start.go:364] duration metric: took 329.541µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0513 17:44:03.168565   38597 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:defau
lt-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:44:03.168838   38597 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:44:03.174553   38597 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:44:03.221262   38597 start.go:159] libmachine.API.Create for "default-k8s-diff-port-730000" (driver="qemu2")
	I0513 17:44:03.221314   38597 client.go:168] LocalClient.Create starting
	I0513 17:44:03.221421   38597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:44:03.221476   38597 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:03.221496   38597 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:03.221571   38597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:44:03.221604   38597 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:03.221617   38597 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:03.222273   38597 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:44:03.392278   38597 main.go:141] libmachine: Creating SSH key...
	I0513 17:44:03.486114   38597 main.go:141] libmachine: Creating Disk image...
	I0513 17:44:03.486120   38597 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:44:03.486308   38597 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:44:03.499042   38597 main.go:141] libmachine: STDOUT: 
	I0513 17:44:03.499064   38597 main.go:141] libmachine: STDERR: 
	I0513 17:44:03.499122   38597 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2 +20000M
	I0513 17:44:03.510130   38597 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:44:03.510154   38597 main.go:141] libmachine: STDERR: 
	I0513 17:44:03.510172   38597 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:44:03.510176   38597 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:44:03.510212   38597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:cd:10:b2:19:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:44:03.511862   38597 main.go:141] libmachine: STDOUT: 
	I0513 17:44:03.511886   38597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:03.511899   38597 client.go:171] duration metric: took 290.585166ms to LocalClient.Create
	I0513 17:44:05.514054   38597 start.go:128] duration metric: took 2.345234458s to createHost
	I0513 17:44:05.514115   38597 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 2.345757459s
	W0513 17:44:05.514488   38597 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:05.525137   38597 out.go:177] 
	W0513 17:44:05.533286   38597 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:05.533331   38597 out.go:239] * 
	* 
	W0513 17:44:05.535849   38597 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:44:05.546176   38597 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (63.615166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)
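
Note: every qemu2 provisioning failure in this run follows the same pattern. Disk preparation succeeds (qemu-img convert writes the qcow2 image and qemu-img resize grows it by 20000M, both with empty STDERR), but the VM launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which must obtain a vmnet file descriptor from the host daemon before qemu-system-aarch64 can start; it aborts with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is listening on the host-side unix socket. A minimal sketch of how one might verify this on the agent, assuming socket_vmnet was installed from source under /opt/socket_vmnet as the client path suggests (the daemon invocation on the last line follows socket_vmnet's documentation and is an assumption, not something this log confirms):

	# is the socket present, and is a daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# repeat the connection libmachine attempts; "Connection refused" here reproduces the failure
	nc -U /var/run/socket_vmnet < /dev/null
	# (assumed) relaunch the daemon as root so the socket is recreated
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With a healthy daemon, the built-in retry ("Will try again in 5 seconds") would normally recover; here both attempts fail, so the test gives up after roughly 10 seconds.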

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-981000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-981000 create -f testdata/busybox.yaml: exit status 1 (30.64825ms)

** stderr ** 
	error: context "no-preload-981000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-981000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (27.773333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.130875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
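
Note: this failure is downstream of the start failure, not an independent bug. Because "no-preload-981000" never provisioned, minikube never wrote a kubeconfig context for it, so every `kubectl --context no-preload-981000 ...` call fails immediately with "context does not exist". A quick way to confirm (kubectl's own config subcommand, which exits non-zero when the named context is absent):

	kubectl config get-contexts no-preload-981000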

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-981000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-981000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-981000 describe deploy/metrics-server -n kube-system: exit status 1 (27.097375ms)

** stderr ** 
	error: context "no-preload-981000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-981000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (27.796166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
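
Note: the assertion here concerns image overrides: `addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain` should produce a deployment whose container image is "fake.domain/registry.k8s.io/echoserver:1.4", and the test greps the describe output for that string. On a healthy cluster, a narrower spot-check might look like this (a sketch, not part of the test itself):

	kubectl --context no-preload-981000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

Here the check never gets that far, since the context does not exist.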

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-730000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-730000 create -f testdata/busybox.yaml: exit status 1 (29.657083ms)

** stderr ** 
	error: context "default-k8s-diff-port-730000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-730000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (28.035916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (28.115167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-730000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-730000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-730000 describe deploy/metrics-server -n kube-system: exit status 1 (27.343625ms)

** stderr ** 
	error: context "default-k8s-diff-port-730000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-730000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (27.852708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.1809635s)

-- stdout --
	* [no-preload-981000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-981000" primary control-plane node in "no-preload-981000" cluster
	* Restarting existing qemu2 VM for "no-preload-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:44:06.158535   38667 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:06.158638   38667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:06.158640   38667 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:06.158643   38667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:06.158771   38667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:06.159752   38667 out.go:298] Setting JSON to false
	I0513 17:44:06.175723   38667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27816,"bootTime":1715619630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:44:06.175800   38667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:44:06.181038   38667 out.go:177] * [no-preload-981000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:44:06.187930   38667 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:44:06.188032   38667 notify.go:220] Checking for updates...
	I0513 17:44:06.191977   38667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:44:06.195930   38667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:44:06.198899   38667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:44:06.202047   38667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:44:06.204910   38667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:44:06.208225   38667 config.go:182] Loaded profile config "no-preload-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:06.208498   38667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:44:06.212909   38667 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:44:06.219880   38667 start.go:297] selected driver: qemu2
	I0513 17:44:06.219888   38667 start.go:901] validating driver "qemu2" against &{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-981000 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:06.219953   38667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:44:06.222230   38667 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:44:06.222254   38667 cni.go:84] Creating CNI manager for ""
	I0513 17:44:06.222265   38667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:44:06.222285   38667 start.go:340] cluster config:
	{Name:no-preload-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:06.226449   38667 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.234877   38667 out.go:177] * Starting "no-preload-981000" primary control-plane node in "no-preload-981000" cluster
	I0513 17:44:06.237938   38667 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:44:06.237999   38667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/no-preload-981000/config.json ...
	I0513 17:44:06.238019   38667 cache.go:107] acquiring lock: {Name:mk5d133a9e8b618e6273c3126afd1a3513319a9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238019   38667 cache.go:107] acquiring lock: {Name:mkc505811e97ace7cc0b74931a03e8484cd9f6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238024   38667 cache.go:107] acquiring lock: {Name:mk835c3c770d3611d3f77181b29eade95ffb2e8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238076   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0513 17:44:06.238078   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0513 17:44:06.238085   38667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.625µs
	I0513 17:44:06.238086   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0513 17:44:06.238092   38667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0513 17:44:06.238085   38667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 70.875µs
	I0513 17:44:06.238093   38667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 77.042µs
	I0513 17:44:06.238098   38667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0513 17:44:06.238095   38667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0513 17:44:06.238094   38667 cache.go:107] acquiring lock: {Name:mkdc74f8a333aaa85e26e5b46063ae9a07acfdfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238100   38667 cache.go:107] acquiring lock: {Name:mk90d9f90a8a3791f6c6f1bff44138e38efd3ef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238104   38667 cache.go:107] acquiring lock: {Name:mkc49cc30839e44703610b108a9c1d5156e21fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238136   38667 cache.go:107] acquiring lock: {Name:mk56d0d31290bb39622156b37c1e82ddfbce8755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238150   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0513 17:44:06.238152   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0513 17:44:06.238153   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0513 17:44:06.238154   38667 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 60.25µs
	I0513 17:44:06.238156   38667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 52µs
	I0513 17:44:06.238157   38667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 57µs
	I0513 17:44:06.238160   38667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0513 17:44:06.238160   38667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0513 17:44:06.238157   38667 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0513 17:44:06.238192   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0513 17:44:06.238197   38667 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 85.916µs
	I0513 17:44:06.238206   38667 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0513 17:44:06.238208   38667 cache.go:107] acquiring lock: {Name:mk3fa9c0568228e56a578e7a48ba696f6f372110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:06.238253   38667 cache.go:115] /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0513 17:44:06.238257   38667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 69.75µs
	I0513 17:44:06.238264   38667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0513 17:44:06.238269   38667 cache.go:87] Successfully saved all images to host disk.
	I0513 17:44:06.238434   38667 start.go:360] acquireMachinesLock for no-preload-981000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:06.238461   38667 start.go:364] duration metric: took 22.375µs to acquireMachinesLock for "no-preload-981000"
	I0513 17:44:06.238471   38667 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:44:06.238475   38667 fix.go:54] fixHost starting: 
	I0513 17:44:06.238591   38667 fix.go:112] recreateIfNeeded on no-preload-981000: state=Stopped err=<nil>
	W0513 17:44:06.238600   38667 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:44:06.246935   38667 out.go:177] * Restarting existing qemu2 VM for "no-preload-981000" ...
	I0513 17:44:06.249901   38667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:02:ad:46:0f:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:44:06.251876   38667 main.go:141] libmachine: STDOUT: 
	I0513 17:44:06.251897   38667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:06.251928   38667 fix.go:56] duration metric: took 13.454125ms for fixHost
	I0513 17:44:06.251932   38667 start.go:83] releasing machines lock for "no-preload-981000", held for 13.46725ms
	W0513 17:44:06.251939   38667 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:06.251974   38667 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:06.251979   38667 start.go:728] Will try again in 5 seconds ...
	I0513 17:44:11.254054   38667 start.go:360] acquireMachinesLock for no-preload-981000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:11.254430   38667 start.go:364] duration metric: took 289.583µs to acquireMachinesLock for "no-preload-981000"
	I0513 17:44:11.254555   38667 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:44:11.254572   38667 fix.go:54] fixHost starting: 
	I0513 17:44:11.255324   38667 fix.go:112] recreateIfNeeded on no-preload-981000: state=Stopped err=<nil>
	W0513 17:44:11.255351   38667 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:44:11.263697   38667 out.go:177] * Restarting existing qemu2 VM for "no-preload-981000" ...
	I0513 17:44:11.267932   38667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:02:ad:46:0f:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/no-preload-981000/disk.qcow2
	I0513 17:44:11.276854   38667 main.go:141] libmachine: STDOUT: 
	I0513 17:44:11.276924   38667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:11.276993   38667 fix.go:56] duration metric: took 22.419125ms for fixHost
	I0513 17:44:11.277010   38667 start.go:83] releasing machines lock for "no-preload-981000", held for 22.557ms
	W0513 17:44:11.277169   38667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:11.284715   38667 out.go:177] 
	W0513 17:44:11.288626   38667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:11.288651   38667 out.go:239] * 
	* 
	W0513 17:44:11.291567   38667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:44:11.298655   38667 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-981000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (65.587583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
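
Note: SecondStart differs from FirstStart only in the code path taken. The profile already exists, so minikube skips creation ("Skipping create...Using existing machine configuration") and goes through fixHost to restart the stopped VM, but the restart runs the same socket_vmnet_client wrapper and dies on the same refused socket. The advice printed in the log ("minikube delete -p no-preload-981000 may fix it") cannot help while the daemon is down; a plausible recovery order, assuming socket_vmnet has been brought back first, would be:

	# only after socket_vmnet is listening again:
	out/minikube-darwin-arm64 delete -p no-preload-981000
	out/minikube-darwin-arm64 start -p no-preload-981000 --driver=qemu2 --network=socket_vmnet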

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.184629625s)

-- stdout --
	* [default-k8s-diff-port-730000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:44:09.421399   38694 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:09.421513   38694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:09.421516   38694 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:09.421519   38694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:09.421646   38694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:09.422645   38694 out.go:298] Setting JSON to false
	I0513 17:44:09.438613   38694 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27819,"bootTime":1715619630,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:44:09.438679   38694 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:44:09.443811   38694 out.go:177] * [default-k8s-diff-port-730000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:44:09.450800   38694 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:44:09.450871   38694 notify.go:220] Checking for updates...
	I0513 17:44:09.454825   38694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:44:09.457832   38694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:44:09.460752   38694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:44:09.463788   38694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:44:09.466792   38694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:44:09.470062   38694 config.go:182] Loaded profile config "default-k8s-diff-port-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:09.470324   38694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:44:09.474765   38694 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:44:09.481751   38694 start.go:297] selected driver: qemu2
	I0513 17:44:09.481760   38694 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:09.481815   38694 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:44:09.483948   38694 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0513 17:44:09.483977   38694 cni.go:84] Creating CNI manager for ""
	I0513 17:44:09.483984   38694 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:44:09.484009   38694 start.go:340] cluster config:
	{Name:default-k8s-diff-port-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-730000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:09.488139   38694 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:09.493769   38694 out.go:177] * Starting "default-k8s-diff-port-730000" primary control-plane node in "default-k8s-diff-port-730000" cluster
	I0513 17:44:09.497755   38694 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:44:09.497768   38694 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:44:09.497781   38694 cache.go:56] Caching tarball of preloaded images
	I0513 17:44:09.497836   38694 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:44:09.497841   38694 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:44:09.497894   38694 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/default-k8s-diff-port-730000/config.json ...
	I0513 17:44:09.498374   38694 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:09.498407   38694 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0513 17:44:09.498416   38694 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:44:09.498421   38694 fix.go:54] fixHost starting: 
	I0513 17:44:09.498533   38694 fix.go:112] recreateIfNeeded on default-k8s-diff-port-730000: state=Stopped err=<nil>
	W0513 17:44:09.498541   38694 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:44:09.501832   38694 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	I0513 17:44:09.509792   38694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:cd:10:b2:19:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:44:09.511630   38694 main.go:141] libmachine: STDOUT: 
	I0513 17:44:09.511649   38694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:09.511675   38694 fix.go:56] duration metric: took 13.25475ms for fixHost
	I0513 17:44:09.511679   38694 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 13.268667ms
	W0513 17:44:09.511685   38694 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:09.511717   38694 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:09.511721   38694 start.go:728] Will try again in 5 seconds ...
	I0513 17:44:14.513807   38694 start.go:360] acquireMachinesLock for default-k8s-diff-port-730000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:14.514224   38694 start.go:364] duration metric: took 332.75µs to acquireMachinesLock for "default-k8s-diff-port-730000"
	I0513 17:44:14.514352   38694 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:44:14.514374   38694 fix.go:54] fixHost starting: 
	I0513 17:44:14.515208   38694 fix.go:112] recreateIfNeeded on default-k8s-diff-port-730000: state=Stopped err=<nil>
	W0513 17:44:14.515236   38694 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:44:14.530920   38694 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-730000" ...
	I0513 17:44:14.534871   38694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:cd:10:b2:19:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/default-k8s-diff-port-730000/disk.qcow2
	I0513 17:44:14.544053   38694 main.go:141] libmachine: STDOUT: 
	I0513 17:44:14.544110   38694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:14.544200   38694 fix.go:56] duration metric: took 29.830166ms for fixHost
	I0513 17:44:14.544264   38694 start.go:83] releasing machines lock for "default-k8s-diff-port-730000", held for 29.976833ms
	W0513 17:44:14.544444   38694 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:14.552640   38694 out.go:177] 
	W0513 17:44:14.555657   38694 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:14.555675   38694 out.go:239] * 
	* 
	W0513 17:44:14.557578   38694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:44:14.566456   38694 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-730000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (69.010042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
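
Note: both SecondStart failures above (and every start attempt later in this report) fail at the same point: qemu is launched through socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. "Connection refused" means no daemon was listening there, so the VM never boots. A minimal preflight sketch of that connectivity check (illustrative only, not code from minikube or the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken from the SocketVMnetPath field in the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the condition behind the report's "Connection refused" errors.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}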

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-981000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (31.12725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-981000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.787958ms)

** stderr ** 
	error: context "no-preload-981000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.287916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-981000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.150833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
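
Note: the "-want +got" block above follows the diff conventions of the github.com/google/go-cmp package: "-" lines are expected images absent from the actual list, which is empty here because image list ran against a stopped host. An illustrative reproduction of that diff shape (assuming go-cmp produces it; the test's exact call is not shown in this log):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // image list returned nothing: the host is stopped
		// cmp.Diff prefixes entries only in want with "-" and entries only in got with "+".
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}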

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-981000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-981000 --alsologtostderr -v=1: exit status 83 (40.218625ms)

-- stdout --
	* The control-plane node no-preload-981000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-981000"

-- /stdout --
** stderr ** 
	I0513 17:44:11.559280   38713 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:11.559432   38713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:11.559435   38713 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:11.559437   38713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:11.559566   38713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:11.559784   38713 out.go:298] Setting JSON to false
	I0513 17:44:11.559792   38713 mustload.go:65] Loading cluster: no-preload-981000
	I0513 17:44:11.559971   38713 config.go:182] Loaded profile config "no-preload-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:11.563611   38713 out.go:177] * The control-plane node no-preload-981000 host is not running: state=Stopped
	I0513 17:44:11.567570   38713 out.go:177]   To start a cluster, run: "minikube start -p no-preload-981000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-981000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (28.115875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (27.79375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-981000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-026000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-026000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.763237708s)

-- stdout --
	* [newest-cni-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-026000" primary control-plane node in "newest-cni-026000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-026000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:44:12.019021   38736 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:12.019153   38736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:12.019156   38736 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:12.019158   38736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:12.019285   38736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:12.020431   38736 out.go:298] Setting JSON to false
	I0513 17:44:12.036793   38736 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27822,"bootTime":1715619630,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:44:12.036873   38736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:44:12.042040   38736 out.go:177] * [newest-cni-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:44:12.049955   38736 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:44:12.053949   38736 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:44:12.050000   38736 notify.go:220] Checking for updates...
	I0513 17:44:12.058890   38736 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:44:12.061980   38736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:44:12.064961   38736 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:44:12.067969   38736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:44:12.071311   38736 config.go:182] Loaded profile config "default-k8s-diff-port-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:12.071372   38736 config.go:182] Loaded profile config "multinode-126000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:12.071423   38736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:44:12.075960   38736 out.go:177] * Using the qemu2 driver based on user configuration
	I0513 17:44:12.082883   38736 start.go:297] selected driver: qemu2
	I0513 17:44:12.082890   38736 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:44:12.082897   38736 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:44:12.085086   38736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0513 17:44:12.085113   38736 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0513 17:44:12.092903   38736 out.go:177] * Automatically selected the socket_vmnet network
	I0513 17:44:12.095997   38736 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0513 17:44:12.096011   38736 cni.go:84] Creating CNI manager for ""
	I0513 17:44:12.096019   38736 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:44:12.096023   38736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:44:12.096055   38736 start.go:340] cluster config:
	{Name:newest-cni-026000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:12.100689   38736 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:12.107971   38736 out.go:177] * Starting "newest-cni-026000" primary control-plane node in "newest-cni-026000" cluster
	I0513 17:44:12.111937   38736 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:44:12.111951   38736 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:44:12.111961   38736 cache.go:56] Caching tarball of preloaded images
	I0513 17:44:12.112016   38736 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:44:12.112021   38736 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:44:12.112086   38736 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/newest-cni-026000/config.json ...
	I0513 17:44:12.112098   38736 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/newest-cni-026000/config.json: {Name:mk4589e7c573127e8605d5f955b54d5ab9376017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:44:12.112502   38736 start.go:360] acquireMachinesLock for newest-cni-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:12.112538   38736 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "newest-cni-026000"
	I0513 17:44:12.112552   38736 start.go:93] Provisioning new machine with config: &{Name:newest-cni-026000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:44:12.112592   38736 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:44:12.120973   38736 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:44:12.138699   38736 start.go:159] libmachine.API.Create for "newest-cni-026000" (driver="qemu2")
	I0513 17:44:12.138726   38736 client.go:168] LocalClient.Create starting
	I0513 17:44:12.138790   38736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:44:12.138828   38736 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:12.138841   38736 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:12.138883   38736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:44:12.138911   38736 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:12.138918   38736 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:12.139304   38736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:44:12.286458   38736 main.go:141] libmachine: Creating SSH key...
	I0513 17:44:12.338075   38736 main.go:141] libmachine: Creating Disk image...
	I0513 17:44:12.338081   38736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:44:12.338273   38736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:12.350653   38736 main.go:141] libmachine: STDOUT: 
	I0513 17:44:12.350686   38736 main.go:141] libmachine: STDERR: 
	I0513 17:44:12.350731   38736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2 +20000M
	I0513 17:44:12.361553   38736 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:44:12.361571   38736 main.go:141] libmachine: STDERR: 
	I0513 17:44:12.361582   38736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:12.361587   38736 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:44:12.361620   38736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:16:94:8a:5b:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:12.363356   38736 main.go:141] libmachine: STDOUT: 
	I0513 17:44:12.363374   38736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:12.363392   38736 client.go:171] duration metric: took 224.665208ms to LocalClient.Create
	I0513 17:44:14.365575   38736 start.go:128] duration metric: took 2.253006708s to createHost
	I0513 17:44:14.365697   38736 start.go:83] releasing machines lock for "newest-cni-026000", held for 2.253192041s
	W0513 17:44:14.365757   38736 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:14.378914   38736 out.go:177] * Deleting "newest-cni-026000" in qemu2 ...
	W0513 17:44:14.410140   38736 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:14.410170   38736 start.go:728] Will try again in 5 seconds ...
	I0513 17:44:19.412253   38736 start.go:360] acquireMachinesLock for newest-cni-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:19.412711   38736 start.go:364] duration metric: took 377.167µs to acquireMachinesLock for "newest-cni-026000"
	I0513 17:44:19.412820   38736 start.go:93] Provisioning new machine with config: &{Name:newest-cni-026000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0513 17:44:19.413223   38736 start.go:125] createHost starting for "" (driver="qemu2")
	I0513 17:44:19.418956   38736 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0513 17:44:19.467600   38736 start.go:159] libmachine.API.Create for "newest-cni-026000" (driver="qemu2")
	I0513 17:44:19.467656   38736 client.go:168] LocalClient.Create starting
	I0513 17:44:19.467774   38736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/ca.pem
	I0513 17:44:19.467840   38736 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:19.467858   38736 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:19.467915   38736 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18872-34554/.minikube/certs/cert.pem
	I0513 17:44:19.467959   38736 main.go:141] libmachine: Decoding PEM data...
	I0513 17:44:19.467971   38736 main.go:141] libmachine: Parsing certificate...
	I0513 17:44:19.468502   38736 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso...
	I0513 17:44:19.625462   38736 main.go:141] libmachine: Creating SSH key...
	I0513 17:44:19.683423   38736 main.go:141] libmachine: Creating Disk image...
	I0513 17:44:19.683433   38736 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0513 17:44:19.683615   38736 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2.raw /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:19.696480   38736 main.go:141] libmachine: STDOUT: 
	I0513 17:44:19.696496   38736 main.go:141] libmachine: STDERR: 
	I0513 17:44:19.696550   38736 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2 +20000M
	I0513 17:44:19.707845   38736 main.go:141] libmachine: STDOUT: Image resized.
	
	I0513 17:44:19.707861   38736 main.go:141] libmachine: STDERR: 
	I0513 17:44:19.707872   38736 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:19.707877   38736 main.go:141] libmachine: Starting QEMU VM...
	I0513 17:44:19.707919   38736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:93:6e:1f:1e:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:19.709650   38736 main.go:141] libmachine: STDOUT: 
	I0513 17:44:19.709670   38736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:19.709681   38736 client.go:171] duration metric: took 242.024541ms to LocalClient.Create
	I0513 17:44:21.711819   38736 start.go:128] duration metric: took 2.298600333s to createHost
	I0513 17:44:21.711933   38736 start.go:83] releasing machines lock for "newest-cni-026000", held for 2.299201458s
	W0513 17:44:21.712255   38736 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-026000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-026000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:21.721889   38736 out.go:177] 
	W0513 17:44:21.728957   38736 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:21.728991   38736 out.go:239] * 
	* 
	W0513 17:44:21.731503   38736 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:44:21.745802   38736 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-026000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000: exit status 7 (67.811875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.84s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-730000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (31.635917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-730000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-730000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-730000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.7745ms)

** stderr ** 
	error: context "default-k8s-diff-port-730000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-730000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (27.983333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-730000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (27.876083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
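
Note: the (-want +got) block above is in the diff style of github.com/google/go-cmp; because the VM is stopped, "image list" returns an empty set and every expected v1.30.0 image lands on the -want side. A reduced sketch of how such a diff is produced (want entries abridged from the log, the rest illustrative):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // image list from a stopped VM comes back empty
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.0 images missing (-want +got):\n%s", diff)
	}
}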

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-730000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-730000 --alsologtostderr -v=1: exit status 83 (47.65125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-730000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-730000"

-- /stdout --
** stderr ** 
	I0513 17:44:14.830084   38758 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:14.830239   38758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:14.830242   38758 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:14.830245   38758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:14.830386   38758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:14.830625   38758 out.go:298] Setting JSON to false
	I0513 17:44:14.830632   38758 mustload.go:65] Loading cluster: default-k8s-diff-port-730000
	I0513 17:44:14.830856   38758 config.go:182] Loaded profile config "default-k8s-diff-port-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:14.842298   38758 out.go:177] * The control-plane node default-k8s-diff-port-730000 host is not running: state=Stopped
	I0513 17:44:14.845701   38758 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-730000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-730000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (27.706625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (27.909958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-730000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-026000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-026000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.191295084s)

-- stdout --
	* [newest-cni-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-026000" primary control-plane node in "newest-cni-026000" cluster
	* Restarting existing qemu2 VM for "newest-cni-026000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-026000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0513 17:44:25.361469   38811 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:25.361587   38811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:25.361590   38811 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:25.361593   38811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:25.361704   38811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:25.362726   38811 out.go:298] Setting JSON to false
	I0513 17:44:25.378976   38811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":27835,"bootTime":1715619630,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:44:25.379046   38811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:44:25.383401   38811 out.go:177] * [newest-cni-026000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:44:25.390354   38811 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:44:25.390398   38811 notify.go:220] Checking for updates...
	I0513 17:44:25.397341   38811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:44:25.400305   38811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:44:25.407378   38811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:44:25.414294   38811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:44:25.422267   38811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:44:25.425655   38811 config.go:182] Loaded profile config "newest-cni-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:25.425951   38811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:44:25.430328   38811 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:44:25.437286   38811 start.go:297] selected driver: qemu2
	I0513 17:44:25.437293   38811 start.go:901] validating driver "qemu2" against &{Name:newest-cni-026000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:25.437335   38811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:44:25.439734   38811 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0513 17:44:25.439759   38811 cni.go:84] Creating CNI manager for ""
	I0513 17:44:25.439768   38811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:44:25.439789   38811 start.go:340] cluster config:
	{Name:newest-cni-026000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:44:25.444102   38811 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:44:25.451307   38811 out.go:177] * Starting "newest-cni-026000" primary control-plane node in "newest-cni-026000" cluster
	I0513 17:44:25.455298   38811 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:44:25.455315   38811 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:44:25.455326   38811 cache.go:56] Caching tarball of preloaded images
	I0513 17:44:25.455389   38811 preload.go:173] Found /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0513 17:44:25.455395   38811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0513 17:44:25.455470   38811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/newest-cni-026000/config.json ...
	I0513 17:44:25.455990   38811 start.go:360] acquireMachinesLock for newest-cni-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:25.456022   38811 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "newest-cni-026000"
	I0513 17:44:25.456033   38811 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:44:25.456038   38811 fix.go:54] fixHost starting: 
	I0513 17:44:25.456166   38811 fix.go:112] recreateIfNeeded on newest-cni-026000: state=Stopped err=<nil>
	W0513 17:44:25.456175   38811 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:44:25.464378   38811 out.go:177] * Restarting existing qemu2 VM for "newest-cni-026000" ...
	I0513 17:44:25.468348   38811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:93:6e:1f:1e:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:25.470347   38811 main.go:141] libmachine: STDOUT: 
	I0513 17:44:25.470370   38811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:25.470398   38811 fix.go:56] duration metric: took 14.360458ms for fixHost
	I0513 17:44:25.470402   38811 start.go:83] releasing machines lock for "newest-cni-026000", held for 14.375125ms
	W0513 17:44:25.470409   38811 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:25.470451   38811 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:25.470456   38811 start.go:728] Will try again in 5 seconds ...
	I0513 17:44:30.472502   38811 start.go:360] acquireMachinesLock for newest-cni-026000: {Name:mke99c1907ae6856edf7e2ef82c189d2488df5e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0513 17:44:30.472922   38811 start.go:364] duration metric: took 333.875µs to acquireMachinesLock for "newest-cni-026000"
	I0513 17:44:30.473053   38811 start.go:96] Skipping create...Using existing machine configuration
	I0513 17:44:30.473073   38811 fix.go:54] fixHost starting: 
	I0513 17:44:30.473747   38811 fix.go:112] recreateIfNeeded on newest-cni-026000: state=Stopped err=<nil>
	W0513 17:44:30.473774   38811 fix.go:138] unexpected machine state, will restart: <nil>
	I0513 17:44:30.478222   38811 out.go:177] * Restarting existing qemu2 VM for "newest-cni-026000" ...
	I0513 17:44:30.482401   38811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:93:6e:1f:1e:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18872-34554/.minikube/machines/newest-cni-026000/disk.qcow2
	I0513 17:44:30.491303   38811 main.go:141] libmachine: STDOUT: 
	I0513 17:44:30.491379   38811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0513 17:44:30.491472   38811 fix.go:56] duration metric: took 18.399042ms for fixHost
	I0513 17:44:30.491494   38811 start.go:83] releasing machines lock for "newest-cni-026000", held for 18.546875ms
	W0513 17:44:30.491689   38811 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-026000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-026000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0513 17:44:30.499186   38811 out.go:177] 
	W0513 17:44:30.502153   38811 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0513 17:44:30.502195   38811 out.go:239] * 
	* 
	W0513 17:44:30.505095   38811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:44:30.512098   38811 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-026000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000: exit status 7 (66.950333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
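
Note: both restart attempts above die on the same dial error, which suggests nothing was listening on /var/run/socket_vmnet on this agent while the qemu2 driver shelled out through socket_vmnet_client. A tiny probe that reproduces the failure mode (socket path taken from the log; using a plain unix-socket dial as the check is an assumption):

package main

import (
	"fmt"
	"net"
)

func main() {
	// socket_vmnet_client hands the VM its network over this unix socket;
	// if the daemon is down, Dial fails the same way the driver did.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println(err) // e.g. dial unix /var/run/socket_vmnet: connect: connection refused
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}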

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-026000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000: exit status 7 (29.584125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-026000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-026000 --alsologtostderr -v=1: exit status 83 (38.983417ms)

-- stdout --
	* The control-plane node newest-cni-026000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-026000"

-- /stdout --
** stderr ** 
	I0513 17:44:30.695384   38825 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:44:30.695556   38825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:30.695558   38825 out.go:304] Setting ErrFile to fd 2...
	I0513 17:44:30.695561   38825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:44:30.695691   38825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:44:30.695922   38825 out.go:298] Setting JSON to false
	I0513 17:44:30.695929   38825 mustload.go:65] Loading cluster: newest-cni-026000
	I0513 17:44:30.696135   38825 config.go:182] Loaded profile config "newest-cni-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:44:30.700082   38825 out.go:177] * The control-plane node newest-cni-026000 host is not running: state=Stopped
	I0513 17:44:30.701206   38825 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-026000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-026000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000: exit status 7 (29.365667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-026000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000: exit status 7 (29.292583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.0/json-events 9.4
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.23
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.36
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.22
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.11
43 TestErrorSpam/stop 8.8
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.69
55 TestFunctional/serial/CacheCmd/cache/add_local 1.2
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.21
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.33
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.16
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 1.8
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.06
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.48
258 TestNoKubernetes/serial/Stop 3.39
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.67
277 TestStartStop/group/old-k8s-version/serial/Stop 2.15
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
282 TestStartStop/group/embed-certs/serial/Stop 3.12
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
299 TestStartStop/group/no-preload/serial/Stop 3.36
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.45
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.32
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-547000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-547000: exit status 85 (90.808459ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |          |
	|         | -p download-only-547000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 17:18:14
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 17:18:14.164304   35058 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:18:14.164435   35058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:14.164445   35058 out.go:304] Setting ErrFile to fd 2...
	I0513 17:18:14.164448   35058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:14.164555   35058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	W0513 17:18:14.164639   35058 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18872-34554/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18872-34554/.minikube/config/config.json: no such file or directory
	I0513 17:18:14.165995   35058 out.go:298] Setting JSON to true
	I0513 17:18:14.182739   35058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26264,"bootTime":1715619630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:18:14.182818   35058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:18:14.189915   35058 out.go:97] [download-only-547000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:18:14.192977   35058 out.go:169] MINIKUBE_LOCATION=18872
	I0513 17:18:14.190037   35058 notify.go:220] Checking for updates...
	W0513 17:18:14.190054   35058 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball: no such file or directory
	I0513 17:18:14.199863   35058 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:18:14.202901   35058 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:18:14.205822   35058 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:18:14.208906   35058 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	W0513 17:18:14.213327   35058 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 17:18:14.213517   35058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:18:14.216896   35058 out.go:97] Using the qemu2 driver based on user configuration
	I0513 17:18:14.216914   35058 start.go:297] selected driver: qemu2
	I0513 17:18:14.216937   35058 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:18:14.217005   35058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:18:14.219883   35058 out.go:169] Automatically selected the socket_vmnet network
	I0513 17:18:14.225913   35058 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0513 17:18:14.226016   35058 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 17:18:14.226095   35058 cni.go:84] Creating CNI manager for ""
	I0513 17:18:14.226113   35058 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0513 17:18:14.226174   35058 start.go:340] cluster config:
	{Name:download-only-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:18:14.231095   35058 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:18:14.235884   35058 out.go:97] Downloading VM boot image ...
	I0513 17:18:14.235909   35058 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/iso/arm64/minikube-v1.33.1-arm64.iso
	I0513 17:18:18.360722   35058 out.go:97] Starting "download-only-547000" primary control-plane node in "download-only-547000" cluster
	I0513 17:18:18.360769   35058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:18:18.417474   35058 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:18.417500   35058 cache.go:56] Caching tarball of preloaded images
	I0513 17:18:18.417661   35058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:18:18.422694   35058 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0513 17:18:18.422701   35058 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:18.494447   35058 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:23.694307   35058 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:23.694463   35058 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:24.390051   35058 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0513 17:18:24.390264   35058 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/download-only-547000/config.json ...
	I0513 17:18:24.390282   35058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18872-34554/.minikube/profiles/download-only-547000/config.json: {Name:mk00910a7732fd1fca67979e6d1118b3602b6c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0513 17:18:24.390529   35058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0513 17:18:24.391402   35058 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0513 17:18:24.732496   35058 out.go:169] 
	W0513 17:18:24.738665   35058 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18872-34554/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320 0x108e55320] Decompressors:map[bz2:0x1400000f170 gz:0x1400000f178 tar:0x1400000f110 tar.bz2:0x1400000f120 tar.gz:0x1400000f140 tar.xz:0x1400000f150 tar.zst:0x1400000f160 tbz2:0x1400000f120 tgz:0x1400000f140 txz:0x1400000f150 tzst:0x1400000f160 xz:0x1400000f180 zip:0x1400000f190 zst:0x1400000f188] Getters:map[file:0x140015aa6f0 http:0x14000654280 https:0x140006542d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0513 17:18:24.738686   35058 out_reason.go:110] 
	W0513 17:18:24.745502   35058 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0513 17:18:24.749489   35058 out.go:169] 
	
	
	* The control-plane node download-only-547000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-547000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
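
Note: the root cause buried in the log above is the 404 on the kubectl v1.20.0 checksum for darwin/arm64; upstream apparently never published that binary for Apple silicon, so the cache step cannot succeed. A sketch that probes the same URL copied from the log (a HEAD request is just one way to confirm):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url) // headers only; the body is irrelevant here
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the log's "bad response code: 404" corresponds to a 404 here
}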

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-547000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.0/json-events (9.4s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-115000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-115000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 : (9.395031666s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (9.40s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-115000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-115000: exit status 85 (80.1075ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | -p download-only-547000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| delete  | -p download-only-547000        | download-only-547000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT | 13 May 24 17:18 PDT |
	| start   | -o=json --download-only        | download-only-115000 | jenkins | v1.33.1 | 13 May 24 17:18 PDT |                     |
	|         | -p download-only-115000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/13 17:18:25
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0513 17:18:25.398929   35094 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:18:25.399076   35094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:25.399080   35094 out.go:304] Setting ErrFile to fd 2...
	I0513 17:18:25.399082   35094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:18:25.399198   35094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:18:25.400237   35094 out.go:298] Setting JSON to true
	I0513 17:18:25.416487   35094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26275,"bootTime":1715619630,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:18:25.416559   35094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:18:25.422211   35094 out.go:97] [download-only-115000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:18:25.426161   35094 out.go:169] MINIKUBE_LOCATION=18872
	I0513 17:18:25.422280   35094 notify.go:220] Checking for updates...
	I0513 17:18:25.433254   35094 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:18:25.436216   35094 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:18:25.439147   35094 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:18:25.442206   35094 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	W0513 17:18:25.448139   35094 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0513 17:18:25.448330   35094 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:18:25.451167   35094 out.go:97] Using the qemu2 driver based on user configuration
	I0513 17:18:25.451175   35094 start.go:297] selected driver: qemu2
	I0513 17:18:25.451179   35094 start.go:901] validating driver "qemu2" against <nil>
	I0513 17:18:25.451219   35094 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0513 17:18:25.454155   35094 out.go:169] Automatically selected the socket_vmnet network
	I0513 17:18:25.457405   35094 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0513 17:18:25.457496   35094 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0513 17:18:25.457517   35094 cni.go:84] Creating CNI manager for ""
	I0513 17:18:25.457524   35094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0513 17:18:25.457530   35094 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0513 17:18:25.457579   35094 start.go:340] cluster config:
	{Name:download-only-115000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-115000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:18:25.461828   35094 iso.go:125] acquiring lock: {Name:mkfb712f7114efa46d47dc8cb22a2ad068bc0b9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0513 17:18:25.465194   35094 out.go:97] Starting "download-only-115000" primary control-plane node in "download-only-115000" cluster
	I0513 17:18:25.465205   35094 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:18:25.525267   35094 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:25.525281   35094 cache.go:56] Caching tarball of preloaded images
	I0513 17:18:25.525495   35094 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0513 17:18:25.530297   35094 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0513 17:18:25.530305   35094 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:25.611153   35094 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0513 17:18:30.129173   35094 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0513 17:18:30.129329   35094 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18872-34554/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-115000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-115000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)
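
Note: the download.go step above requests the preload tarball with a "?checksum=md5:..." query and verifies the saved file. A minimal single-pass download-and-verify sketch in Go (illustrative only, not minikube's actual implementation; the URL and checksum are copied from the log, the helper name and destination path are ours):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url into dest while hashing, then compares checksums.
	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and md5 copied from the download.go line in the log above.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4",
			"preloaded-images.tar.lz4",
			"677034533668c42fec962cc52f9b3c42",
		)
		fmt.Println(err)
	}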

TestDownloadOnly/v1.30.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-115000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.36s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-248000 --alsologtostderr --binary-mirror http://127.0.0.1:55896 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-248000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-248000
--- PASS: TestBinaryMirror (0.36s)
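
Note: TestBinaryMirror points "minikube start --download-only" at a local HTTP mirror on 127.0.0.1:55896 via --binary-mirror. Any static file server exposing the layout minikube expects would satisfy it; a minimal sketch (the ./mirror directory is a hypothetical stand-in for the test's real fixture):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory of cached binaries over HTTP.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:55896", nil))
	}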

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-521000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-521000: exit status 85 (56.291542ms)

-- stdout --
	* Profile "addons-521000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-521000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-521000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-521000: exit status 85 (60.224916ms)

-- stdout --
	* Profile "addons-521000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-521000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.22s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.22s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status: exit status 7 (30.2195ms)

-- stdout --
	nospam-940000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status: exit status 7 (28.738292ms)

-- stdout --
	nospam-940000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status: exit status 7 (29.385083ms)

-- stdout --
	nospam-940000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
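
Note: each status call above exits with status 7, the code minikube uses for a stopped host, so the test only checks that the output stays quiet rather than expecting success. A small sketch of distinguishing that exit code from a caller, assuming the binary path and profile name from the log (error handling trimmed):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "nospam-940000", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			fmt.Println("host is stopped (exit status 7), as in the runs above")
		}
	}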

TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause: exit status 83 (39.753916ms)

-- stdout --
	* The control-plane node nospam-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-940000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause: exit status 83 (38.9095ms)

-- stdout --
	* The control-plane node nospam-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-940000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause: exit status 83 (38.947208ms)

-- stdout --
	* The control-plane node nospam-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-940000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.11s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause: exit status 83 (37.2125ms)

-- stdout --
	* The control-plane node nospam-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-940000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause: exit status 83 (37.345917ms)

-- stdout --
	* The control-plane node nospam-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-940000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause: exit status 83 (38.743417ms)

-- stdout --
	* The control-plane node nospam-940000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-940000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.11s)

TestErrorSpam/stop (8.8s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop: (1.998367583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop: (3.537171041s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop: (3.263884333s)
--- PASS: TestErrorSpam/stop (8.80s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18872-34554/.minikube/files/etc/test/nested/copy/35055/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.69s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.69s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2414369514/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache add minikube-local-cache-test:functional-968000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 cache delete minikube-local-cache-test:functional-968000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-968000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 config get cpus: exit status 14 (28.738417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 config get cpus: exit status 14 (36.209625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (158.37825ms)

-- stdout --
	* [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0513 17:20:15.630367   35710 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:20:15.630536   35710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:15.630541   35710 out.go:304] Setting ErrFile to fd 2...
	I0513 17:20:15.630544   35710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:15.630724   35710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:20:15.632033   35710 out.go:298] Setting JSON to false
	I0513 17:20:15.652048   35710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26385,"bootTime":1715619630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:20:15.652187   35710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:20:15.657980   35710 out.go:177] * [functional-968000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0513 17:20:15.665038   35710 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:20:15.665101   35710 notify.go:220] Checking for updates...
	I0513 17:20:15.669041   35710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:20:15.672932   35710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:20:15.676015   35710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:20:15.678894   35710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:20:15.681975   35710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:20:15.685347   35710 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:20:15.685638   35710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:20:15.689953   35710 out.go:177] * Using the qemu2 driver based on existing profile
	I0513 17:20:15.696937   35710 start.go:297] selected driver: qemu2
	I0513 17:20:15.696946   35710 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:20:15.697004   35710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:20:15.703972   35710 out.go:177] 
	W0513 17:20:15.707955   35710 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0513 17:20:15.711754   35710 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
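
Note: the first dry run exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY because --memory 250MB is below the 1800MB floor quoted in the log. A toy version of such a floor check (the constant and message mirror the log line; this is not minikube's actual validation code):

	package main

	import "fmt"

	const minUsableMB = 1800 // floor quoted in the error message above

	func validateMemoryMB(reqMB int) error {
		if reqMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemoryMB(250)) // mirrors --memory 250MB from the test
	}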

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-968000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.829042ms)

-- stdout --
	* [functional-968000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0513 17:20:15.856323   35721 out.go:291] Setting OutFile to fd 1 ...
	I0513 17:20:15.856433   35721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:15.856436   35721 out.go:304] Setting ErrFile to fd 2...
	I0513 17:20:15.856438   35721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0513 17:20:15.856572   35721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18872-34554/.minikube/bin
	I0513 17:20:15.857980   35721 out.go:298] Setting JSON to false
	I0513 17:20:15.874647   35721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":26385,"bootTime":1715619630,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0513 17:20:15.874744   35721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0513 17:20:15.880040   35721 out.go:177] * [functional-968000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0513 17:20:15.886945   35721 out.go:177]   - MINIKUBE_LOCATION=18872
	I0513 17:20:15.886989   35721 notify.go:220] Checking for updates...
	I0513 17:20:15.894940   35721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	I0513 17:20:15.897972   35721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0513 17:20:15.900983   35721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0513 17:20:15.903929   35721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	I0513 17:20:15.906967   35721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0513 17:20:15.910347   35721 config.go:182] Loaded profile config "functional-968000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0513 17:20:15.910616   35721 driver.go:392] Setting default libvirt URI to qemu:///system
	I0513 17:20:15.914910   35721 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0513 17:20:15.921911   35721 start.go:297] selected driver: qemu2
	I0513 17:20:15.921920   35721 start.go:901] validating driver "qemu2" against &{Name:functional-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0513 17:20:15.921976   35721 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0513 17:20:15.927983   35721 out.go:177] 
	W0513 17:20:15.931933   35721 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0513 17:20:15.934869   35721 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.33s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.295719958s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-968000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.33s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image rm gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-968000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 image save --daemon gcr.io/google-containers/addon-resizer:functional-968000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-968000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "68.903708ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.894458ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)
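
Note: the Took "..." lines come from the test timing each invocation. A minimal sketch of that measure-a-command pattern (binary path copied from the log; the output formatting is ours):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		if err := exec.Command("out/minikube-darwin-arm64", "profile", "list").Run(); err != nil {
			fmt.Println("run failed:", err)
		}
		// Mirrors the test's Took "68.903708ms" style of reporting.
		fmt.Printf("Took %q to run \"out/minikube-darwin-arm64 profile list\"\n", time.Since(start).String())
	}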

TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "68.640792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.767792ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011593041s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
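
Note: the test deliberately resolves through dscacheutil so the lookup goes via the macOS resolver that the tunnel publishes to. A rough stdlib-Go equivalent is below, with the caveat that Go's resolver may not consult the same cache dscacheutil does:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		// Service name copied from the dscacheutil invocation above.
		addrs, err := net.DefaultResolver.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		fmt.Println(addrs, err)
	}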

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-968000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-968000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-968000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-968000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.8s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-388000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-388000 --output=json --user=testUser: (1.7947135s)
--- PASS: TestJSONOutput/stop/Command (1.80s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-490000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-490000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.985042ms)
-- stdout --
	{"specversion":"1.0","id":"bd306809-48bd-4e07-93db-9dd57df683d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-490000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19f8ac0d-b57c-491d-b77b-f1f75a173089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18872"}}
	{"specversion":"1.0","id":"b0b83c64-3180-4b76-9a74-f2d1cf6ff149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig"}}
	{"specversion":"1.0","id":"27c85b1d-9a23-4578-9a5b-ad462fcb698a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3e5e7db3-2a46-439a-8228-23b8d89aa833","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b904806-0488-4804-9157-4973aa9fbff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube"}}
	{"specversion":"1.0","id":"1bb7b51c-16fa-4d05-bb88-399640f047a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"215fa654-250b-49b6-a700-a69b61dee40d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-490000
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-839000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (110.591375ms)
-- stdout --
	* [NoKubernetes-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18872
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18872-34554/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18872-34554/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.024417ms)
-- stdout --
	* The control-plane node NoKubernetes-839000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-839000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.670184917s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.813649042s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.48s)

TestNoKubernetes/serial/Stop (3.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-839000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-839000: (3.386413417s)
--- PASS: TestNoKubernetes/serial/Stop (3.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-839000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.483792ms)
-- stdout --
	* The control-plane node NoKubernetes-839000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-839000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-201000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

TestStartStop/group/old-k8s-version/serial/Stop (2.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-271000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-271000 --alsologtostderr -v=3: (2.1510285s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-271000 -n old-k8s-version-271000: exit status 7 (63.62025ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-271000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-026000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-026000 --alsologtostderr -v=3: (3.119999125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (58.03175ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-026000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/no-preload/serial/Stop (3.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-981000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-981000 --alsologtostderr -v=3: (3.364525916s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.36s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-730000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-730000 --alsologtostderr -v=3: (3.452736042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-981000 -n no-preload-981000: exit status 7 (44.298334ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-981000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-730000 -n default-k8s-diff-port-730000: exit status 7 (54.672625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-730000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-026000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-026000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-026000 --alsologtostderr -v=3: (3.321327167s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-026000 -n newest-cni-026000: exit status 7 (58.060541ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-026000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.71s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port352069418/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1715645980278932000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port352069418/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1715645980278932000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port352069418/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1715645980278932000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port352069418/001/test-1715645980278932000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (54.165375ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.083125ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.2185ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.934167ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.160416ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.017209ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.334ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p": exit status 83 (46.593291ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port352069418/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.71s)

TestFunctional/parallel/MountCmd/specific-port (10.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3354086331/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.255292ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.774917ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.494416ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.191459ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.984666ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.606042ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.313708ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "sudo umount -f /mount-9p": exit status 83 (47.772833ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-968000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3354086331/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4015309668/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4015309668/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4015309668/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (81.655125ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (86.608709ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (81.342666ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (83.835625ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (83.677417ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (85.471666ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-968000 ssh "findmnt -T" /mount1: exit status 83 (83.233542ms)
-- stdout --
	* The control-plane node functional-968000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-968000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4015309668/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4015309668/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-968000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4015309668/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.76s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

x
+
TestNetworkPlugins/group/cilium (2.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-748000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-748000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-748000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: iptables-save:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: iptables table nat:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-748000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-748000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-748000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-748000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-748000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-748000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-748000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-748000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-748000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-748000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-748000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: kubelet daemon config:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> k8s: kubelet logs:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-748000

>>> host: docker daemon status:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: docker daemon config:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: docker system info:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: cri-docker daemon status:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: cri-docker daemon config:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: cri-dockerd version:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: containerd daemon status:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: containerd daemon config:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: containerd config dump:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: crio daemon status:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: crio daemon config:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: /etc/crio:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

>>> host: crio config:
* Profile "cilium-748000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-748000"

----------------------- debugLogs end: cilium-748000 [took: 2.25989725s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-748000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-748000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-691000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-691000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
